Test Report: KVM_Linux_crio 19763

aa5eddb378ec81f2e43c808f5204b861e96187fd:2024-10-07:36541

Failed tests (19/228)

TestAddons/serial/GCPAuth/PullSecret (480.65s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-246818 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-246818 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9d331845-59f4-4092-938c-97591d81951b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-246818 -n addons-246818
addons_test.go:627: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-07 11:43:01.66217174 +0000 UTC m=+700.280755746
addons_test.go:627: (dbg) Run:  kubectl --context addons-246818 describe po busybox -n default
addons_test.go:627: (dbg) kubectl --context addons-246818 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-246818/192.168.39.141
Start Time:       Mon, 07 Oct 2024 11:35:01 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.22
IPs:
  IP:  10.244.0.22
Containers:
  busybox:
    Container ID:  
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6r7hg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-6r7hg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/busybox to addons-246818
  Normal   Pulling    6m37s (x4 over 8m)      kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     6m37s (x4 over 8m)      kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
  Warning  Failed     6m37s (x4 over 8m)      kubelet            Error: ErrImagePull
  Warning  Failed     6m7s (x6 over 7m59s)    kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m48s (x20 over 7m59s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:627: (dbg) Run:  kubectl --context addons-246818 logs busybox -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-246818 logs busybox -n default: exit status 1 (74.787484ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:627: kubectl --context addons-246818 logs busybox -n default: exit status 1
addons_test.go:629: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.65s)
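The failure above is an authentication problem on image pull, not a scheduling or resource issue: the kubelet repeatedly hits "unable to retrieve auth token: invalid username/password: unauthorized" while pulling gcr.io/k8s-minikube/busybox:1.28.4-glibc, so the pod never leaves ImagePullBackOff and the 8m0s wait expires. A small Go helper along these lines can re-check the container's waiting reason outside the test harness (the context name, pod name, and namespace are taken from the log; the helper itself is only an illustrative sketch, not part of the suite):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask kubectl for the waiting reason of the busybox container; in this
		// run it was ImagePullBackOff after repeated ErrImagePull events.
		out, err := exec.Command("kubectl", "--context", "addons-246818",
			"get", "pod", "busybox", "-n", "default",
			"-o", "jsonpath={.status.containerStatuses[0].state.waiting.reason}",
		).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Println("waiting reason:", string(out))
	}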

TestAddons/parallel/Ingress (152.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-246818 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-246818 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-246818 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d86b2c09-e064-4560-be78-a763c6b35ac1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d86b2c09-e064-4560-be78-a763c6b35ac1] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004224418s
I1007 11:43:42.855605  384271 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-246818 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.95780035s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-246818 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.141
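The ingress test fails differently: the nginx pod reports Running within about 11s, but the in-VM curl against http://127.0.0.1/ with the nginx.example.com Host header never returns a page, and the ssh wrapper exits with status 28 after roughly 2m9s (28 is curl's exit code for a timed-out operation). The probe can be reproduced with an explicit client-side deadline using a sketch like the following (profile name, URL, and Host header come from the log above; the two-minute timeout is an assumption):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Give the whole probe an explicit deadline instead of waiting for the
		// remote curl to give up on its own (assumption: two minutes is enough
		// to reproduce the timeout seen in the log).
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		// Same command the test ran; ssh propagates the remote curl's exit code.
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-246818",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v\noutput:\n%s\n", err, out)
	}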
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-246818 -n addons-246818
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-246818 logs -n 25: (1.370116239s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-243020 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | -p download-only-243020              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-243020              | download-only-243020 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| start   | -o=json --download-only              | download-only-257663 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | -p download-only-257663              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-257663              | download-only-257663 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-243020              | download-only-243020 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-257663              | download-only-257663 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| start   | --download-only -p                   | binary-mirror-827339 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | binary-mirror-827339                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38787               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-827339              | binary-mirror-827339 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| addons  | enable dashboard -p                  | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | addons-246818                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | addons-246818                        |                      |         |         |                     |                     |
	| start   | -p addons-246818 --wait=true         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:34 UTC | 07 Oct 24 11:34 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | -p addons-246818                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | -p addons-246818                     |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| ip      | addons-246818 ip                     | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-246818 addons                 | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ssh     | addons-246818 ssh curl -s            | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-246818 ip                     | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:31:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:31:34.116156  384891 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:31:34.116270  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:31:34.116277  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:31:34.116282  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:31:34.116469  384891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 11:31:34.117144  384891 out.go:352] Setting JSON to false
	I1007 11:31:34.118102  384891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4440,"bootTime":1728296254,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:31:34.118176  384891 start.go:139] virtualization: kvm guest
	I1007 11:31:34.120408  384891 out.go:177] * [addons-246818] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:31:34.122258  384891 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 11:31:34.122285  384891 notify.go:220] Checking for updates...
	I1007 11:31:34.124959  384891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:31:34.126627  384891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:31:34.128213  384891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:34.129872  384891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:31:34.131237  384891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:31:34.132940  384891 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:31:34.166945  384891 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 11:31:34.168406  384891 start.go:297] selected driver: kvm2
	I1007 11:31:34.168430  384891 start.go:901] validating driver "kvm2" against <nil>
	I1007 11:31:34.168446  384891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:31:34.169281  384891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:31:34.169397  384891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:31:34.186640  384891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:31:34.186710  384891 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:31:34.186981  384891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:31:34.187031  384891 cni.go:84] Creating CNI manager for ""
	I1007 11:31:34.187088  384891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:31:34.187116  384891 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 11:31:34.187194  384891 start.go:340] cluster config:
	{Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:31:34.187319  384891 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:31:34.189414  384891 out.go:177] * Starting "addons-246818" primary control-plane node in "addons-246818" cluster
	I1007 11:31:34.191135  384891 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:31:34.191199  384891 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 11:31:34.191215  384891 cache.go:56] Caching tarball of preloaded images
	I1007 11:31:34.191343  384891 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:31:34.191358  384891 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:31:34.191753  384891 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/config.json ...
	I1007 11:31:34.191788  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/config.json: {Name:mk8ac1a8a8e3adadfd093d5da0627d5b3cabf0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:31:34.191973  384891 start.go:360] acquireMachinesLock for addons-246818: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:31:34.192039  384891 start.go:364] duration metric: took 47.555µs to acquireMachinesLock for "addons-246818"
	I1007 11:31:34.192065  384891 start.go:93] Provisioning new machine with config: &{Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:31:34.192185  384891 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 11:31:34.194346  384891 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 11:31:34.194555  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:31:34.194629  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:31:34.210789  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I1007 11:31:34.211351  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:31:34.211942  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:31:34.211966  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:31:34.212395  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:31:34.212604  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:34.212831  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:34.213029  384891 start.go:159] libmachine.API.Create for "addons-246818" (driver="kvm2")
	I1007 11:31:34.213068  384891 client.go:168] LocalClient.Create starting
	I1007 11:31:34.213129  384891 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 11:31:34.455639  384891 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 11:31:34.570226  384891 main.go:141] libmachine: Running pre-create checks...
	I1007 11:31:34.570260  384891 main.go:141] libmachine: (addons-246818) Calling .PreCreateCheck
	I1007 11:31:34.570842  384891 main.go:141] libmachine: (addons-246818) Calling .GetConfigRaw
	I1007 11:31:34.571323  384891 main.go:141] libmachine: Creating machine...
	I1007 11:31:34.571338  384891 main.go:141] libmachine: (addons-246818) Calling .Create
	I1007 11:31:34.571502  384891 main.go:141] libmachine: (addons-246818) Creating KVM machine...
	I1007 11:31:34.572696  384891 main.go:141] libmachine: (addons-246818) DBG | found existing default KVM network
	I1007 11:31:34.573525  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:34.573329  384913 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000115200}
	I1007 11:31:34.573556  384891 main.go:141] libmachine: (addons-246818) DBG | created network xml: 
	I1007 11:31:34.573571  384891 main.go:141] libmachine: (addons-246818) DBG | <network>
	I1007 11:31:34.573580  384891 main.go:141] libmachine: (addons-246818) DBG |   <name>mk-addons-246818</name>
	I1007 11:31:34.573590  384891 main.go:141] libmachine: (addons-246818) DBG |   <dns enable='no'/>
	I1007 11:31:34.573600  384891 main.go:141] libmachine: (addons-246818) DBG |   
	I1007 11:31:34.573610  384891 main.go:141] libmachine: (addons-246818) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 11:31:34.573622  384891 main.go:141] libmachine: (addons-246818) DBG |     <dhcp>
	I1007 11:31:34.573632  384891 main.go:141] libmachine: (addons-246818) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 11:31:34.573640  384891 main.go:141] libmachine: (addons-246818) DBG |     </dhcp>
	I1007 11:31:34.573647  384891 main.go:141] libmachine: (addons-246818) DBG |   </ip>
	I1007 11:31:34.573659  384891 main.go:141] libmachine: (addons-246818) DBG |   
	I1007 11:31:34.573670  384891 main.go:141] libmachine: (addons-246818) DBG | </network>
	I1007 11:31:34.573677  384891 main.go:141] libmachine: (addons-246818) DBG | 
	I1007 11:31:34.579638  384891 main.go:141] libmachine: (addons-246818) DBG | trying to create private KVM network mk-addons-246818 192.168.39.0/24...
	I1007 11:31:34.649044  384891 main.go:141] libmachine: (addons-246818) DBG | private KVM network mk-addons-246818 192.168.39.0/24 created
	I1007 11:31:34.649094  384891 main.go:141] libmachine: (addons-246818) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818 ...
	I1007 11:31:34.649118  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:34.648912  384913 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:34.649140  384891 main.go:141] libmachine: (addons-246818) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 11:31:34.649156  384891 main.go:141] libmachine: (addons-246818) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 11:31:34.924379  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:34.924203  384913 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa...
	I1007 11:31:35.127437  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:35.127261  384913 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/addons-246818.rawdisk...
	I1007 11:31:35.127475  384891 main.go:141] libmachine: (addons-246818) DBG | Writing magic tar header
	I1007 11:31:35.127490  384891 main.go:141] libmachine: (addons-246818) DBG | Writing SSH key tar header
	I1007 11:31:35.127501  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:35.127388  384913 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818 ...
	I1007 11:31:35.127525  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818
	I1007 11:31:35.127537  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 11:31:35.127548  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818 (perms=drwx------)
	I1007 11:31:35.127558  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 11:31:35.127564  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 11:31:35.127603  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:35.127639  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 11:31:35.127648  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 11:31:35.127657  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 11:31:35.127665  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins
	I1007 11:31:35.127678  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home
	I1007 11:31:35.127691  384891 main.go:141] libmachine: (addons-246818) DBG | Skipping /home - not owner
	I1007 11:31:35.127708  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 11:31:35.127726  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 11:31:35.127740  384891 main.go:141] libmachine: (addons-246818) Creating domain...
	I1007 11:31:35.128819  384891 main.go:141] libmachine: (addons-246818) define libvirt domain using xml: 
	I1007 11:31:35.128847  384891 main.go:141] libmachine: (addons-246818) <domain type='kvm'>
	I1007 11:31:35.128859  384891 main.go:141] libmachine: (addons-246818)   <name>addons-246818</name>
	I1007 11:31:35.128867  384891 main.go:141] libmachine: (addons-246818)   <memory unit='MiB'>4000</memory>
	I1007 11:31:35.128910  384891 main.go:141] libmachine: (addons-246818)   <vcpu>2</vcpu>
	I1007 11:31:35.128933  384891 main.go:141] libmachine: (addons-246818)   <features>
	I1007 11:31:35.128941  384891 main.go:141] libmachine: (addons-246818)     <acpi/>
	I1007 11:31:35.128948  384891 main.go:141] libmachine: (addons-246818)     <apic/>
	I1007 11:31:35.128969  384891 main.go:141] libmachine: (addons-246818)     <pae/>
	I1007 11:31:35.128980  384891 main.go:141] libmachine: (addons-246818)     
	I1007 11:31:35.128988  384891 main.go:141] libmachine: (addons-246818)   </features>
	I1007 11:31:35.128998  384891 main.go:141] libmachine: (addons-246818)   <cpu mode='host-passthrough'>
	I1007 11:31:35.129006  384891 main.go:141] libmachine: (addons-246818)   
	I1007 11:31:35.129016  384891 main.go:141] libmachine: (addons-246818)   </cpu>
	I1007 11:31:35.129046  384891 main.go:141] libmachine: (addons-246818)   <os>
	I1007 11:31:35.129077  384891 main.go:141] libmachine: (addons-246818)     <type>hvm</type>
	I1007 11:31:35.129084  384891 main.go:141] libmachine: (addons-246818)     <boot dev='cdrom'/>
	I1007 11:31:35.129095  384891 main.go:141] libmachine: (addons-246818)     <boot dev='hd'/>
	I1007 11:31:35.129107  384891 main.go:141] libmachine: (addons-246818)     <bootmenu enable='no'/>
	I1007 11:31:35.129117  384891 main.go:141] libmachine: (addons-246818)   </os>
	I1007 11:31:35.129125  384891 main.go:141] libmachine: (addons-246818)   <devices>
	I1007 11:31:35.129140  384891 main.go:141] libmachine: (addons-246818)     <disk type='file' device='cdrom'>
	I1007 11:31:35.129155  384891 main.go:141] libmachine: (addons-246818)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/boot2docker.iso'/>
	I1007 11:31:35.129167  384891 main.go:141] libmachine: (addons-246818)       <target dev='hdc' bus='scsi'/>
	I1007 11:31:35.129174  384891 main.go:141] libmachine: (addons-246818)       <readonly/>
	I1007 11:31:35.129180  384891 main.go:141] libmachine: (addons-246818)     </disk>
	I1007 11:31:35.129194  384891 main.go:141] libmachine: (addons-246818)     <disk type='file' device='disk'>
	I1007 11:31:35.129223  384891 main.go:141] libmachine: (addons-246818)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 11:31:35.129239  384891 main.go:141] libmachine: (addons-246818)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/addons-246818.rawdisk'/>
	I1007 11:31:35.129249  384891 main.go:141] libmachine: (addons-246818)       <target dev='hda' bus='virtio'/>
	I1007 11:31:35.129258  384891 main.go:141] libmachine: (addons-246818)     </disk>
	I1007 11:31:35.129263  384891 main.go:141] libmachine: (addons-246818)     <interface type='network'>
	I1007 11:31:35.129278  384891 main.go:141] libmachine: (addons-246818)       <source network='mk-addons-246818'/>
	I1007 11:31:35.129290  384891 main.go:141] libmachine: (addons-246818)       <model type='virtio'/>
	I1007 11:31:35.129301  384891 main.go:141] libmachine: (addons-246818)     </interface>
	I1007 11:31:35.129312  384891 main.go:141] libmachine: (addons-246818)     <interface type='network'>
	I1007 11:31:35.129322  384891 main.go:141] libmachine: (addons-246818)       <source network='default'/>
	I1007 11:31:35.129335  384891 main.go:141] libmachine: (addons-246818)       <model type='virtio'/>
	I1007 11:31:35.129345  384891 main.go:141] libmachine: (addons-246818)     </interface>
	I1007 11:31:35.129351  384891 main.go:141] libmachine: (addons-246818)     <serial type='pty'>
	I1007 11:31:35.129363  384891 main.go:141] libmachine: (addons-246818)       <target port='0'/>
	I1007 11:31:35.129375  384891 main.go:141] libmachine: (addons-246818)     </serial>
	I1007 11:31:35.129385  384891 main.go:141] libmachine: (addons-246818)     <console type='pty'>
	I1007 11:31:35.129392  384891 main.go:141] libmachine: (addons-246818)       <target type='serial' port='0'/>
	I1007 11:31:35.129398  384891 main.go:141] libmachine: (addons-246818)     </console>
	I1007 11:31:35.129404  384891 main.go:141] libmachine: (addons-246818)     <rng model='virtio'>
	I1007 11:31:35.129410  384891 main.go:141] libmachine: (addons-246818)       <backend model='random'>/dev/random</backend>
	I1007 11:31:35.129416  384891 main.go:141] libmachine: (addons-246818)     </rng>
	I1007 11:31:35.129420  384891 main.go:141] libmachine: (addons-246818)     
	I1007 11:31:35.129426  384891 main.go:141] libmachine: (addons-246818)     
	I1007 11:31:35.129431  384891 main.go:141] libmachine: (addons-246818)   </devices>
	I1007 11:31:35.129437  384891 main.go:141] libmachine: (addons-246818) </domain>
	I1007 11:31:35.129452  384891 main.go:141] libmachine: (addons-246818) 
	I1007 11:31:35.136045  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:59:de:27 in network default
	I1007 11:31:35.136621  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:35.136638  384891 main.go:141] libmachine: (addons-246818) Ensuring networks are active...
	I1007 11:31:35.137397  384891 main.go:141] libmachine: (addons-246818) Ensuring network default is active
	I1007 11:31:35.137759  384891 main.go:141] libmachine: (addons-246818) Ensuring network mk-addons-246818 is active
	I1007 11:31:35.139309  384891 main.go:141] libmachine: (addons-246818) Getting domain xml...
	I1007 11:31:35.140007  384891 main.go:141] libmachine: (addons-246818) Creating domain...
	I1007 11:31:36.562781  384891 main.go:141] libmachine: (addons-246818) Waiting to get IP...
	I1007 11:31:36.563649  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:36.564039  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:36.564102  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:36.564034  384913 retry.go:31] will retry after 196.803567ms: waiting for machine to come up
	I1007 11:31:36.762559  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:36.762980  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:36.763006  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:36.762928  384913 retry.go:31] will retry after 309.609813ms: waiting for machine to come up
	I1007 11:31:37.074568  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:37.075066  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:37.075099  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:37.075019  384913 retry.go:31] will retry after 357.050229ms: waiting for machine to come up
	I1007 11:31:37.433468  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:37.433865  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:37.433888  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:37.433824  384913 retry.go:31] will retry after 404.967007ms: waiting for machine to come up
	I1007 11:31:37.840487  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:37.840912  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:37.840944  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:37.840852  384913 retry.go:31] will retry after 505.430509ms: waiting for machine to come up
	I1007 11:31:38.347450  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:38.347839  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:38.347868  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:38.347768  384913 retry.go:31] will retry after 847.255626ms: waiting for machine to come up
	I1007 11:31:39.196471  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:39.196947  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:39.196980  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:39.196886  384913 retry.go:31] will retry after 920.58458ms: waiting for machine to come up
	I1007 11:31:40.119476  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:40.119814  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:40.119836  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:40.119790  384913 retry.go:31] will retry after 948.651988ms: waiting for machine to come up
	I1007 11:31:41.070215  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:41.070708  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:41.070731  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:41.070668  384913 retry.go:31] will retry after 1.382953489s: waiting for machine to come up
	I1007 11:31:42.455452  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:42.455916  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:42.455941  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:42.455847  384913 retry.go:31] will retry after 2.262578278s: waiting for machine to come up
	I1007 11:31:44.719656  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:44.720338  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:44.720368  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:44.720277  384913 retry.go:31] will retry after 2.289996901s: waiting for machine to come up
	I1007 11:31:47.012350  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:47.012859  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:47.012889  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:47.012809  384913 retry.go:31] will retry after 3.343133276s: waiting for machine to come up
	I1007 11:31:50.358204  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:50.358539  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:50.358566  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:50.358487  384913 retry.go:31] will retry after 4.335427182s: waiting for machine to come up
	I1007 11:31:54.695193  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:54.695591  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:54.695617  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:54.695544  384913 retry.go:31] will retry after 3.558303483s: waiting for machine to come up
	I1007 11:31:58.258305  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.258838  384891 main.go:141] libmachine: (addons-246818) Found IP for machine: 192.168.39.141
	I1007 11:31:58.258873  384891 main.go:141] libmachine: (addons-246818) Reserving static IP address...
	I1007 11:31:58.258887  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has current primary IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.259281  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find host DHCP lease matching {name: "addons-246818", mac: "52:54:00:b1:d7:db", ip: "192.168.39.141"} in network mk-addons-246818
	I1007 11:31:58.385299  384891 main.go:141] libmachine: (addons-246818) Reserved static IP address: 192.168.39.141
	I1007 11:31:58.385331  384891 main.go:141] libmachine: (addons-246818) DBG | Getting to WaitForSSH function...
	I1007 11:31:58.385340  384891 main.go:141] libmachine: (addons-246818) Waiting for SSH to be available...
	I1007 11:31:58.387663  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.388108  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.388140  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.388409  384891 main.go:141] libmachine: (addons-246818) DBG | Using SSH client type: external
	I1007 11:31:58.388428  384891 main.go:141] libmachine: (addons-246818) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa (-rw-------)
	I1007 11:31:58.388460  384891 main.go:141] libmachine: (addons-246818) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 11:31:58.388472  384891 main.go:141] libmachine: (addons-246818) DBG | About to run SSH command:
	I1007 11:31:58.388485  384891 main.go:141] libmachine: (addons-246818) DBG | exit 0
	I1007 11:31:58.523637  384891 main.go:141] libmachine: (addons-246818) DBG | SSH cmd err, output: <nil>: 
	I1007 11:31:58.523957  384891 main.go:141] libmachine: (addons-246818) KVM machine creation complete!
	I1007 11:31:58.524322  384891 main.go:141] libmachine: (addons-246818) Calling .GetConfigRaw
	I1007 11:31:58.524995  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:58.525265  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:58.525453  384891 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 11:31:58.525471  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:31:58.526983  384891 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 11:31:58.527001  384891 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 11:31:58.527007  384891 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 11:31:58.527013  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.529966  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.530364  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.530392  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.530622  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.530830  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.531010  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.531238  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.531430  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.531658  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.531672  384891 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 11:31:58.638640  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:31:58.638671  384891 main.go:141] libmachine: Detecting the provisioner...
	I1007 11:31:58.638699  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.641499  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.641868  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.641902  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.642074  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.642323  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.642499  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.642641  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.642833  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.643029  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.643040  384891 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 11:31:58.752146  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 11:31:58.752213  384891 main.go:141] libmachine: found compatible host: buildroot
	I1007 11:31:58.752223  384891 main.go:141] libmachine: Provisioning with buildroot...
	I1007 11:31:58.752233  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:58.752488  384891 buildroot.go:166] provisioning hostname "addons-246818"
	I1007 11:31:58.752521  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:58.752755  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.755321  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.755658  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.755689  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.755781  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.755930  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.756116  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.756273  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.756441  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.756677  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.756693  384891 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-246818 && echo "addons-246818" | sudo tee /etc/hostname
	I1007 11:31:58.878487  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-246818
	
	I1007 11:31:58.878522  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.881235  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.881595  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.881628  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.881829  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.882043  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.882221  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.882373  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.882547  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.882736  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.882751  384891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-246818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-246818/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-246818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:31:59.000758  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:31:59.000793  384891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 11:31:59.000860  384891 buildroot.go:174] setting up certificates
	I1007 11:31:59.000882  384891 provision.go:84] configureAuth start
	I1007 11:31:59.000901  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:59.001290  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:31:59.004173  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.004729  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.004770  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.005018  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.007634  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.007984  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.008012  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.008236  384891 provision.go:143] copyHostCerts
	I1007 11:31:59.008313  384891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 11:31:59.008444  384891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 11:31:59.008531  384891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 11:31:59.008592  384891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.addons-246818 san=[127.0.0.1 192.168.39.141 addons-246818 localhost minikube]
	I1007 11:31:59.251829  384891 provision.go:177] copyRemoteCerts
	I1007 11:31:59.251901  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:31:59.251926  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.255073  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.255515  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.255554  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.255695  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.255927  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.256090  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.256229  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.342524  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 11:31:59.367975  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:31:59.393410  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 11:31:59.418593  384891 provision.go:87] duration metric: took 417.693053ms to configureAuth
	I1007 11:31:59.418624  384891 buildroot.go:189] setting minikube options for container-runtime
	I1007 11:31:59.418838  384891 config.go:182] Loaded profile config "addons-246818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:31:59.418935  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.421597  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.421932  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.421960  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.422111  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.422335  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.422530  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.422645  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.422799  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:59.423008  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:59.423028  384891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:31:59.655212  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:31:59.655259  384891 main.go:141] libmachine: Checking connection to Docker...
	I1007 11:31:59.655271  384891 main.go:141] libmachine: (addons-246818) Calling .GetURL
	I1007 11:31:59.656909  384891 main.go:141] libmachine: (addons-246818) DBG | Using libvirt version 6000000
	I1007 11:31:59.659411  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.659775  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.659810  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.659963  384891 main.go:141] libmachine: Docker is up and running!
	I1007 11:31:59.659972  384891 main.go:141] libmachine: Reticulating splines...
	I1007 11:31:59.659979  384891 client.go:171] duration metric: took 25.446899659s to LocalClient.Create
	I1007 11:31:59.660003  384891 start.go:167] duration metric: took 25.446975437s to libmachine.API.Create "addons-246818"
	I1007 11:31:59.660014  384891 start.go:293] postStartSetup for "addons-246818" (driver="kvm2")
	I1007 11:31:59.660024  384891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:31:59.660041  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.660313  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:31:59.660341  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.662645  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.663064  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.663113  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.663225  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.663412  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.663549  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.663695  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.746681  384891 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:31:59.750995  384891 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 11:31:59.751029  384891 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 11:31:59.751132  384891 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 11:31:59.751171  384891 start.go:296] duration metric: took 91.150102ms for postStartSetup
	I1007 11:31:59.751218  384891 main.go:141] libmachine: (addons-246818) Calling .GetConfigRaw
	I1007 11:31:59.751830  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:31:59.754353  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.754726  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.754752  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.754998  384891 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/config.json ...
	I1007 11:31:59.755218  384891 start.go:128] duration metric: took 25.563019291s to createHost
	I1007 11:31:59.755244  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.757372  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.757682  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.757708  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.757833  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.757994  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.758133  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.758316  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.758481  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:59.758651  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:59.758660  384891 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 11:31:59.868422  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728300719.835078686
	
	I1007 11:31:59.868449  384891 fix.go:216] guest clock: 1728300719.835078686
	I1007 11:31:59.868459  384891 fix.go:229] Guest: 2024-10-07 11:31:59.835078686 +0000 UTC Remote: 2024-10-07 11:31:59.755232069 +0000 UTC m=+25.679693573 (delta=79.846617ms)
	I1007 11:31:59.868533  384891 fix.go:200] guest clock delta is within tolerance: 79.846617ms
	I1007 11:31:59.868543  384891 start.go:83] releasing machines lock for "addons-246818", held for 25.676492095s
	I1007 11:31:59.868570  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.868898  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:31:59.871581  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.871955  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.871981  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.872222  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.872811  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.872983  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.873091  384891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:31:59.873149  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.873159  384891 ssh_runner.go:195] Run: cat /version.json
	I1007 11:31:59.873181  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.875672  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.875703  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.876005  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.876042  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.876063  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.876076  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.876200  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.876338  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.876412  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.876507  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.876572  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.876743  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.876780  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.876890  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.978691  384891 ssh_runner.go:195] Run: systemctl --version
	I1007 11:31:59.985018  384891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:32:00.152322  384891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 11:32:00.158492  384891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 11:32:00.158593  384891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:32:00.176990  384891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 11:32:00.177022  384891 start.go:495] detecting cgroup driver to use...
	I1007 11:32:00.177109  384891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:32:00.195687  384891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:32:00.211978  384891 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:32:00.212058  384891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:32:00.227604  384891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:32:00.242144  384891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:32:00.366315  384891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:32:00.526683  384891 docker.go:233] disabling docker service ...
	I1007 11:32:00.526776  384891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:32:00.541214  384891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:32:00.554981  384891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:32:00.685283  384891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:32:00.806166  384891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:32:00.821760  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:32:00.840995  384891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:32:00.841077  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.852364  384891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:32:00.852452  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.863984  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.875862  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.887376  384891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:32:00.899170  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.910698  384891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.928710  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.939899  384891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:32:00.950399  384891 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 11:32:00.950497  384891 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 11:32:00.964507  384891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:32:00.975096  384891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:32:01.103400  384891 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:32:01.206446  384891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:32:01.206551  384891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:32:01.212082  384891 start.go:563] Will wait 60s for crictl version
	I1007 11:32:01.212179  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:32:01.216568  384891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:32:01.255513  384891 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 11:32:01.255616  384891 ssh_runner.go:195] Run: crio --version
	I1007 11:32:01.285883  384891 ssh_runner.go:195] Run: crio --version
	I1007 11:32:01.318274  384891 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 11:32:01.319603  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:32:01.322312  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:01.322607  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:01.322642  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:01.322882  384891 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 11:32:01.328032  384891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:32:01.342592  384891 kubeadm.go:883] updating cluster {Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:32:01.342753  384891 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:32:01.342813  384891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:32:01.385519  384891 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 11:32:01.385605  384891 ssh_runner.go:195] Run: which lz4
	I1007 11:32:01.389912  384891 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 11:32:01.394513  384891 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 11:32:01.394572  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 11:32:02.800302  384891 crio.go:462] duration metric: took 1.410419336s to copy over tarball
	I1007 11:32:02.800451  384891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 11:32:04.995474  384891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.194982184s)
	I1007 11:32:04.995507  384891 crio.go:469] duration metric: took 2.195153422s to extract the tarball
	I1007 11:32:04.995518  384891 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 11:32:05.034133  384891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:32:05.081714  384891 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:32:05.081748  384891 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:32:05.081759  384891 kubeadm.go:934] updating node { 192.168.39.141 8443 v1.31.1 crio true true} ...
	I1007 11:32:05.081919  384891 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-246818 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 11:32:05.082006  384891 ssh_runner.go:195] Run: crio config
	I1007 11:32:05.126986  384891 cni.go:84] Creating CNI manager for ""
	I1007 11:32:05.127017  384891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:32:05.127029  384891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:32:05.127055  384891 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-246818 NodeName:addons-246818 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:32:05.127205  384891 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-246818"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 11:32:05.127271  384891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:32:05.138343  384891 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:32:05.138419  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:32:05.148540  384891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 11:32:05.166067  384891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:32:05.184173  384891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1007 11:32:05.202127  384891 ssh_runner.go:195] Run: grep 192.168.39.141	control-plane.minikube.internal$ /etc/hosts
	I1007 11:32:05.206447  384891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:32:05.219733  384891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:32:05.356364  384891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:32:05.374398  384891 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818 for IP: 192.168.39.141
	I1007 11:32:05.374431  384891 certs.go:194] generating shared ca certs ...
	I1007 11:32:05.374455  384891 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.374717  384891 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 11:32:05.569743  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt ...
	I1007 11:32:05.569780  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt: {Name:mka635174f873364a1d996695969f11525f0aad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.570000  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key ...
	I1007 11:32:05.570016  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key: {Name:mkb9f08978b906a4a69bf40b3648846639990aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.570120  384891 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 11:32:05.641034  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt ...
	I1007 11:32:05.641069  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt: {Name:mk6c2e0cb0b3463b53d4a7b8eca27330e83cad52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.641265  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key ...
	I1007 11:32:05.641279  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key: {Name:mkbd00d408f92ed97628a06bd31d4a22a55f1116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.641384  384891 certs.go:256] generating profile certs ...
	I1007 11:32:05.641459  384891 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.key
	I1007 11:32:05.641475  384891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt with IP's: []
	I1007 11:32:05.718596  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt ...
	I1007 11:32:05.718631  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: {Name:mk54791d72c1dd37de668acfdf6ae9b6a18b6816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.718824  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.key ...
	I1007 11:32:05.718838  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.key: {Name:mkc39919855b7ef97968b46dce56ec908abc99e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.718952  384891 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102
	I1007 11:32:05.719011  384891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141]
	I1007 11:32:05.819688  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102 ...
	I1007 11:32:05.819722  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102: {Name:mkfaee04775ee1012712d288fadcabaf991b49f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.819920  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102 ...
	I1007 11:32:05.819938  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102: {Name:mkeee88413f174c6e33cb018157316e66b4b0927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.820040  384891 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt
	I1007 11:32:05.820118  384891 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key
	I1007 11:32:05.820163  384891 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key
	I1007 11:32:05.820181  384891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt with IP's: []
	I1007 11:32:05.968555  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt ...
	I1007 11:32:05.968602  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt: {Name:mk5df33635e69d6716681ea740275cc204f34bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.968800  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key ...
	I1007 11:32:05.968815  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key: {Name:mkf7d084582e160837c9ab4efc5b7bae6d92e36f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.969012  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 11:32:05.969068  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 11:32:05.969100  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:32:05.969125  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 11:32:05.969737  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:32:05.995982  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 11:32:06.021458  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:32:06.050024  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 11:32:06.079964  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 11:32:06.108572  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:32:06.135463  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:32:06.162035  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 11:32:06.186675  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:32:06.216268  384891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:32:06.234408  384891 ssh_runner.go:195] Run: openssl version
	I1007 11:32:06.240683  384891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:32:06.252555  384891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:32:06.257813  384891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:32:06.257897  384891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:32:06.264471  384891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:32:06.276095  384891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:32:06.280492  384891 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 11:32:06.280573  384891 kubeadm.go:392] StartCluster: {Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:32:06.280683  384891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:32:06.280788  384891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:32:06.325293  384891 cri.go:89] found id: ""
	I1007 11:32:06.325397  384891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 11:32:06.338096  384891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 11:32:06.348756  384891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 11:32:06.359237  384891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 11:32:06.359265  384891 kubeadm.go:157] found existing configuration files:
	
	I1007 11:32:06.359321  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 11:32:06.369410  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 11:32:06.369502  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 11:32:06.380168  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 11:32:06.390519  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 11:32:06.390589  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 11:32:06.401125  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 11:32:06.411429  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 11:32:06.411496  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 11:32:06.422449  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 11:32:06.432934  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 11:32:06.433018  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 11:32:06.444113  384891 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 11:32:06.499524  384891 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 11:32:06.499599  384891 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 11:32:06.604372  384891 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 11:32:06.604511  384891 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 11:32:06.604590  384891 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 11:32:06.621867  384891 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 11:32:06.753861  384891 out.go:235]   - Generating certificates and keys ...
	I1007 11:32:06.753997  384891 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 11:32:06.754108  384891 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 11:32:06.754241  384891 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 11:32:06.907525  384891 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 11:32:07.081367  384891 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 11:32:07.235517  384891 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 11:32:07.323576  384891 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 11:32:07.323734  384891 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-246818 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1007 11:32:07.484355  384891 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 11:32:07.484552  384891 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-246818 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1007 11:32:07.690609  384891 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 11:32:07.921485  384891 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 11:32:08.090512  384891 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 11:32:08.090799  384891 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 11:32:08.402148  384891 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 11:32:08.478195  384891 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 11:32:08.612503  384891 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 11:32:08.702731  384891 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 11:32:09.158663  384891 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 11:32:09.159440  384891 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 11:32:09.161819  384891 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 11:32:09.167042  384891 out.go:235]   - Booting up control plane ...
	I1007 11:32:09.167167  384891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 11:32:09.167249  384891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 11:32:09.167364  384891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 11:32:09.179881  384891 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 11:32:09.189965  384891 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 11:32:09.190035  384891 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 11:32:09.324400  384891 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 11:32:09.324529  384891 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 11:32:09.831332  384891 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.899298ms
	I1007 11:32:09.831474  384891 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 11:32:15.831159  384891 kubeadm.go:310] [api-check] The API server is healthy after 6.001731023s
	I1007 11:32:15.856870  384891 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 11:32:15.879662  384891 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 11:32:15.920548  384891 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 11:32:15.920789  384891 kubeadm.go:310] [mark-control-plane] Marking the node addons-246818 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 11:32:15.939440  384891 kubeadm.go:310] [bootstrap-token] Using token: bpaf5t.csjf2xhv6gacp46a
	I1007 11:32:15.940908  384891 out.go:235]   - Configuring RBAC rules ...
	I1007 11:32:15.941047  384891 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 11:32:15.948031  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 11:32:15.960728  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 11:32:15.964750  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 11:32:15.968808  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 11:32:15.973958  384891 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 11:32:16.238653  384891 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 11:32:16.679433  384891 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 11:32:17.237909  384891 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 11:32:17.237938  384891 kubeadm.go:310] 
	I1007 11:32:17.238007  384891 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 11:32:17.238014  384891 kubeadm.go:310] 
	I1007 11:32:17.238117  384891 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 11:32:17.238128  384891 kubeadm.go:310] 
	I1007 11:32:17.238155  384891 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 11:32:17.238231  384891 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 11:32:17.238300  384891 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 11:32:17.238310  384891 kubeadm.go:310] 
	I1007 11:32:17.238377  384891 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 11:32:17.238388  384891 kubeadm.go:310] 
	I1007 11:32:17.238446  384891 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 11:32:17.238488  384891 kubeadm.go:310] 
	I1007 11:32:17.238579  384891 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 11:32:17.238753  384891 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 11:32:17.238851  384891 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 11:32:17.238863  384891 kubeadm.go:310] 
	I1007 11:32:17.238995  384891 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 11:32:17.239104  384891 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 11:32:17.239114  384891 kubeadm.go:310] 
	I1007 11:32:17.239246  384891 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bpaf5t.csjf2xhv6gacp46a \
	I1007 11:32:17.239371  384891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 \
	I1007 11:32:17.239410  384891 kubeadm.go:310] 	--control-plane 
	I1007 11:32:17.239423  384891 kubeadm.go:310] 
	I1007 11:32:17.239519  384891 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 11:32:17.239531  384891 kubeadm.go:310] 
	I1007 11:32:17.239632  384891 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bpaf5t.csjf2xhv6gacp46a \
	I1007 11:32:17.239752  384891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 
	I1007 11:32:17.240386  384891 kubeadm.go:310] W1007 11:32:06.469101     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:32:17.240693  384891 kubeadm.go:310] W1007 11:32:06.469905     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:32:17.240786  384891 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 11:32:17.240815  384891 cni.go:84] Creating CNI manager for ""
	I1007 11:32:17.240824  384891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:32:17.242992  384891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 11:32:17.244570  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 11:32:17.255322  384891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 11:32:17.274225  384891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 11:32:17.274381  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:17.274395  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-246818 minikube.k8s.io/updated_at=2024_10_07T11_32_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=addons-246818 minikube.k8s.io/primary=true
	I1007 11:32:17.305991  384891 ops.go:34] apiserver oom_adj: -16
	I1007 11:32:17.433612  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:17.933706  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:18.434006  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:18.934513  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:19.434172  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:19.933925  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:20.434498  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:20.934340  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:21.035626  384891 kubeadm.go:1113] duration metric: took 3.76133711s to wait for elevateKubeSystemPrivileges
	I1007 11:32:21.035692  384891 kubeadm.go:394] duration metric: took 14.755128051s to StartCluster
	I1007 11:32:21.035722  384891 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:21.035877  384891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:32:21.036315  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:21.036557  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 11:32:21.036565  384891 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:32:21.036649  384891 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 11:32:21.036807  384891 addons.go:69] Setting storage-provisioner=true in profile "addons-246818"
	I1007 11:32:21.036827  384891 addons.go:69] Setting gcp-auth=true in profile "addons-246818"
	I1007 11:32:21.036828  384891 addons.go:69] Setting volcano=true in profile "addons-246818"
	I1007 11:32:21.036807  384891 addons.go:69] Setting inspektor-gadget=true in profile "addons-246818"
	I1007 11:32:21.036852  384891 addons.go:234] Setting addon inspektor-gadget=true in "addons-246818"
	I1007 11:32:21.036853  384891 addons.go:234] Setting addon volcano=true in "addons-246818"
	I1007 11:32:21.036849  384891 addons.go:69] Setting default-storageclass=true in profile "addons-246818"
	I1007 11:32:21.036869  384891 addons.go:69] Setting ingress-dns=true in profile "addons-246818"
	I1007 11:32:21.036879  384891 addons.go:234] Setting addon ingress-dns=true in "addons-246818"
	I1007 11:32:21.036892  384891 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-246818"
	I1007 11:32:21.036910  384891 addons.go:69] Setting metrics-server=true in profile "addons-246818"
	I1007 11:32:21.036924  384891 addons.go:69] Setting registry=true in profile "addons-246818"
	I1007 11:32:21.036927  384891 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-246818"
	I1007 11:32:21.036936  384891 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-246818"
	I1007 11:32:21.036940  384891 addons.go:69] Setting cloud-spanner=true in profile "addons-246818"
	I1007 11:32:21.036952  384891 addons.go:234] Setting addon cloud-spanner=true in "addons-246818"
	I1007 11:32:21.036961  384891 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-246818"
	I1007 11:32:21.036975  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036978  384891 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-246818"
	I1007 11:32:21.036993  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036861  384891 addons.go:69] Setting ingress=true in profile "addons-246818"
	I1007 11:32:21.037030  384891 addons.go:234] Setting addon ingress=true in "addons-246818"
	I1007 11:32:21.037061  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036928  384891 addons.go:234] Setting addon metrics-server=true in "addons-246818"
	I1007 11:32:21.037120  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.037350  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037366  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.036838  384891 addons.go:234] Setting addon storage-provisioner=true in "addons-246818"
	I1007 11:32:21.037391  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037400  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036999  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.037497  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037522  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037549  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037552  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037582  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037557  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037628  384891 addons.go:69] Setting yakd=true in profile "addons-246818"
	I1007 11:32:21.037646  384891 addons.go:234] Setting addon yakd=true in "addons-246818"
	I1007 11:32:21.037680  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036940  384891 addons.go:234] Setting addon registry=true in "addons-246818"
	I1007 11:32:21.037693  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037718  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.037722  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037828  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.036910  384891 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-246818"
	I1007 11:32:21.037863  384891 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-246818"
	I1007 11:32:21.037867  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037869  384891 config.go:182] Loaded profile config "addons-246818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:32:21.036900  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.038071  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.038102  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.036900  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036915  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.038396  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.038456  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.036853  384891 mustload.go:65] Loading cluster: addons-246818
	I1007 11:32:21.037607  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.036926  384891 addons.go:69] Setting volumesnapshots=true in profile "addons-246818"
	I1007 11:32:21.038612  384891 addons.go:234] Setting addon volumesnapshots=true in "addons-246818"
	I1007 11:32:21.038845  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.038991  384891 config.go:182] Loaded profile config "addons-246818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:32:21.039002  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.039392  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.039450  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.038918  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.039508  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.038917  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.039622  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.038947  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.038892  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.040135  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.043624  384891 out.go:177] * Verifying Kubernetes components...
	I1007 11:32:21.045277  384891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:32:21.059674  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40459
	I1007 11:32:21.059886  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34353
	I1007 11:32:21.060116  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I1007 11:32:21.060236  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34293
	I1007 11:32:21.060237  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.060363  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.060626  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.060914  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.060941  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.061120  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.061149  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.061246  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.061270  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.061308  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.061479  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.061589  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.061687  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.061936  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I1007 11:32:21.062180  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.062193  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.062201  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.062216  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.062230  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.062656  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.062682  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.062857  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.063038  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.079607  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.079643  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.079880  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.079926  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.080116  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.080148  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.080156  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I1007 11:32:21.080301  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I1007 11:32:21.080981  384891 addons.go:234] Setting addon default-storageclass=true in "addons-246818"
	I1007 11:32:21.081031  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.081396  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.081445  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.081570  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.081657  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.081692  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.082569  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.082591  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.082721  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.082731  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.082825  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.082859  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.083559  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.083625  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.084318  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.084370  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.095528  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I1007 11:32:21.097818  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I1007 11:32:21.098201  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.098902  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.098927  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.099603  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.100289  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.100343  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.100410  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I1007 11:32:21.100514  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42121
	I1007 11:32:21.100846  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.101205  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.101253  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.101833  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.101860  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.101981  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.102007  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.102113  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.102128  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.102370  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.102568  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.102933  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.102979  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.103022  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.103397  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.103433  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.103660  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.103694  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.113877  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I1007 11:32:21.114643  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.115420  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.115457  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.115864  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.116171  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.120249  384891 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-246818"
	I1007 11:32:21.120318  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.120889  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.120968  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.122908  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42529
	I1007 11:32:21.123632  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.123722  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.123949  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.124128  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
	I1007 11:32:21.124615  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.125161  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.125181  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.125325  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.125337  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.125531  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36931
	I1007 11:32:21.125965  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.126199  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.126337  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.126554  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.127633  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.128389  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.128408  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.128475  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.129155  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.129312  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I1007 11:32:21.129767  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42759
	I1007 11:32:21.130331  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.130464  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.131079  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.131105  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.131107  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.131163  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.131263  384891 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 11:32:21.131344  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 11:32:21.131653  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.131733  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I1007 11:32:21.131896  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.132323  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.132906  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 11:32:21.132924  384891 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 11:32:21.132947  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.133027  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.133041  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.133528  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.133751  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.134899  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:32:21.135060  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.136912  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.137373  384891 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 11:32:21.138188  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.138641  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.138667  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.139051  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.139278  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.139296  384891 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:32:21.139317  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 11:32:21.139349  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.139409  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.139420  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:32:21.139532  384891 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 11:32:21.140022  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.140246  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.141237  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 11:32:21.141257  384891 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 11:32:21.141282  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.141668  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.141695  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.141761  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1007 11:32:21.142266  384891 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:32:21.142440  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 11:32:21.142466  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.144235  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1007 11:32:21.145460  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.145517  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.145588  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.146385  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.146417  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.146860  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.146879  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.147046  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.147059  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.147114  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.147158  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.147367  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.147399  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.147622  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.147702  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.147719  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.147904  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.147959  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.148109  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.148421  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.148482  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1007 11:32:21.148649  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.148707  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I1007 11:32:21.148836  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.149316  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.149355  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.149633  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.149739  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.149828  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.150158  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.150216  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.150473  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.150757  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.150905  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.150919  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.151003  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:21.151012  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:21.154104  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.154210  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.154235  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.154317  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.154383  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.154396  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.154417  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:21.154428  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.154441  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:21.154447  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:21.154455  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:21.154462  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:21.154491  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.154529  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.154555  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43077
	I1007 11:32:21.154584  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.154625  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.154653  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1007 11:32:21.154704  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:21.154725  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:21.154732  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:21.154758  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	W1007 11:32:21.154823  384891 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 11:32:21.155361  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.155377  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.155408  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.155410  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.156096  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.156098  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.156159  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.156308  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.156328  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.156406  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44697
	I1007 11:32:21.156880  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.156968  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.157016  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.157057  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.157424  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.157456  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.158097  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.158115  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.158531  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.158741  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.159645  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.161490  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.162042  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.162115  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.163859  384891 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 11:32:21.163880  384891 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 11:32:21.163859  384891 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 11:32:21.165361  384891 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 11:32:21.165385  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 11:32:21.165391  384891 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:32:21.165409  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 11:32:21.165411  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.165429  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.166616  384891 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 11:32:21.167980  384891 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 11:32:21.167999  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 11:32:21.168025  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.170468  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.171175  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.171703  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.171726  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.171772  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.172008  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.172069  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.172087  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.172117  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.172343  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.172387  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.172430  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.172550  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.172611  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.172790  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.172809  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.173186  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.173368  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.173431  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.173849  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.174000  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.178470  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I1007 11:32:21.178919  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.179445  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I1007 11:32:21.179523  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.179546  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.179982  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.180089  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.180539  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.180594  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.180597  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1007 11:32:21.180610  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.180961  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.181131  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.181387  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.181501  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35233
	I1007 11:32:21.181867  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.181944  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.181962  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.182396  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.182521  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.182535  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.182653  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.182767  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.183119  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.183140  384891 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 11:32:21.183154  384891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 11:32:21.183180  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.183341  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.185163  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.186316  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.187476  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.188077  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.188103  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.188214  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 11:32:21.188299  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.188343  384891 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 11:32:21.188505  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.188541  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I1007 11:32:21.188671  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.188708  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1007 11:32:21.188930  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.188981  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.189347  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.189515  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.189531  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.189865  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 11:32:21.189883  384891 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 11:32:21.189902  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.189865  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.190077  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.190097  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.190187  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.190696  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.190711  384891 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 11:32:21.190734  384891 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 11:32:21.190756  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.191383  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.194537  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.194635  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.195445  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.195483  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.195505  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.195967  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I1007 11:32:21.196198  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.196207  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.196231  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.196419  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.196513  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.196561  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.196559  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.196717  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.196754  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.196824  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.196885  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.197100  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.197145  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.197116  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.197531  384891 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 11:32:21.197717  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.198163  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.198321  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 11:32:21.199810  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.199881  384891 out.go:177]   - Using image docker.io/busybox:stable
	I1007 11:32:21.199889  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 11:32:21.201263  384891 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 11:32:21.202581  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 11:32:21.202672  384891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:32:21.202687  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 11:32:21.202707  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.203143  384891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:32:21.203162  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 11:32:21.203188  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.205432  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 11:32:21.206350  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.206434  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.206694  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.206752  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.206778  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.206783  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.207047  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.207116  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.207206  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.207253  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.207304  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.207347  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.207390  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.207667  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.208112  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 11:32:21.209535  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W1007 11:32:21.210345  384891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50694->192.168.39.141:22: read: connection reset by peer
	I1007 11:32:21.210375  384891 retry.go:31] will retry after 169.209619ms: ssh: handshake failed: read tcp 192.168.39.1:50694->192.168.39.141:22: read: connection reset by peer
	I1007 11:32:21.212576  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 11:32:21.213890  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 11:32:21.214984  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 11:32:21.215006  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 11:32:21.215033  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.218251  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.218699  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.218755  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.218955  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.219220  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.219366  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.219512  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	W1007 11:32:21.380838  384891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50722->192.168.39.141:22: read: connection reset by peer
	I1007 11:32:21.380877  384891 retry.go:31] will retry after 486.807101ms: ssh: handshake failed: read tcp 192.168.39.1:50722->192.168.39.141:22: read: connection reset by peer
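	(Note) The two warnings above are transient "connection reset by peer" failures while many addon installers open SSH sessions to the node at once; retry.go backs off and the later dials succeed. A minimal sketch of that retry-with-backoff pattern, assuming illustrative attempt counts and delays (this is not minikube's retry.go):

	package main

	// Retry a flaky dial with a growing, jittered delay, the way the
	// "will retry after 169ms / 486ms" lines above handle transient
	// SSH handshake resets. Delays and attempt counts are assumptions.
	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryDial(dial func() error, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = dial(); err == nil {
				return nil
			}
			// Grow the base delay on every failure and add jitter so several
			// clients reconnecting at once do not hit sshd in lockstep.
			delay := time.Duration(100*(i+1))*time.Millisecond +
				time.Duration(rand.Intn(300))*time.Millisecond
			fmt.Printf("dial failed (%v), retrying after %v\n", err, delay)
			time.Sleep(delay)
		}
		return fmt.Errorf("all %d attempts failed: %w", attempts, err)
	}

	func main() {
		calls := 0
		err := retryDial(func() error {
			calls++
			if calls < 3 { // simulate two resets before the handshake succeeds
				return fmt.Errorf("ssh: handshake failed: connection reset by peer")
			}
			return nil
		}, 5)
		fmt.Println("result:", err)
	}
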
	I1007 11:32:21.569888  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:32:21.662408  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:32:21.671323  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 11:32:21.671359  384891 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 11:32:21.677079  384891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 11:32:21.677113  384891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 11:32:21.717464  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 11:32:21.717508  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 11:32:21.721131  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 11:32:21.721162  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 11:32:21.726314  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:32:21.738766  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 11:32:21.751504  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:32:21.781874  384891 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 11:32:21.781907  384891 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 11:32:21.814479  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 11:32:21.824071  384891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:32:21.824369  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
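	(Note) The sed pipeline above rewrites the kube-system coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway IP (192.168.39.1 on this run) and the log plugin is enabled; its completion is confirmed later by the "host record injected into CoreDNS's ConfigMap" line. A minimal Go sketch of the same Corefile transformation, reading a Corefile on stdin — this mirrors the sed expressions and is not minikube's implementation:

	package main

	// Insert a hosts{} block before the forward plugin and a log directive
	// before errors, as the sed '/forward/i' and '/errors/i' expressions do.
	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	const hostIP = "192.168.39.1" // gateway IP from this run; an assumption elsewhere

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			line := sc.Text()
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
				fmt.Println("        hosts {")
				fmt.Printf("           %s host.minikube.internal\n", hostIP)
				fmt.Println("           fallthrough")
				fmt.Println("        }")
			}
			if trimmed == "errors" {
				fmt.Println("        log")
			}
			fmt.Println(line)
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
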
	I1007 11:32:21.836461  384891 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 11:32:21.836512  384891 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 11:32:21.850533  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 11:32:21.850563  384891 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 11:32:21.901980  384891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 11:32:21.902023  384891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 11:32:21.930371  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 11:32:21.930410  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 11:32:21.939212  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 11:32:21.939255  384891 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 11:32:21.953019  384891 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:32:21.953053  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 11:32:22.048099  384891 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 11:32:22.048134  384891 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 11:32:22.121023  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 11:32:22.121067  384891 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 11:32:22.190982  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:32:22.200335  384891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 11:32:22.200368  384891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 11:32:22.226689  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 11:32:22.226728  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 11:32:22.254471  384891 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 11:32:22.254515  384891 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 11:32:22.284154  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:32:22.284192  384891 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 11:32:22.355775  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:32:22.355802  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 11:32:22.460686  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 11:32:22.460719  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 11:32:22.471081  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 11:32:22.471115  384891 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 11:32:22.474890  384891 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 11:32:22.474914  384891 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 11:32:22.505581  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:32:22.509236  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:32:22.540551  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:32:22.706336  384891 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:32:22.706365  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 11:32:22.757067  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 11:32:22.757099  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 11:32:22.851444  384891 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 11:32:22.851479  384891 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 11:32:22.979312  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:32:23.037624  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 11:32:23.037665  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 11:32:23.181268  384891 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 11:32:23.181304  384891 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 11:32:23.329836  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 11:32:23.329871  384891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 11:32:23.422160  384891 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 11:32:23.422204  384891 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 11:32:23.701377  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 11:32:23.701416  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 11:32:23.717985  384891 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:32:23.718012  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 11:32:23.962990  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 11:32:23.963023  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 11:32:24.062714  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:32:24.267101  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:32:24.267134  384891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 11:32:24.488660  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:32:28.211807  384891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 11:32:28.211865  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:28.215550  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:28.216113  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:28.216153  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:28.216343  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:28.216613  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:28.216834  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:28.217015  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:28.781684  384891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 11:32:29.027350  384891 addons.go:234] Setting addon gcp-auth=true in "addons-246818"
	I1007 11:32:29.027409  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:29.027725  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:29.027785  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:29.045375  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45379
	I1007 11:32:29.046015  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:29.046676  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:29.046709  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:29.047110  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:29.047622  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:29.047675  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:29.064290  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I1007 11:32:29.064871  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:29.065411  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:29.065438  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:29.065798  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:29.066019  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:29.068256  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:29.068576  384891 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 11:32:29.068609  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:29.071318  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:29.071806  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:29.071836  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:29.072091  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:29.072359  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:29.072612  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:29.072814  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:30.065708  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.403252117s)
	I1007 11:32:30.065784  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065796  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065811  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.33946418s)
	I1007 11:32:30.065857  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065865  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.495938324s)
	I1007 11:32:30.065881  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065898  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.327105535s)
	I1007 11:32:30.065926  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065900  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065941  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065947  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065956  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.314410411s)
	I1007 11:32:30.066001  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066014  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066107  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.251596479s)
	I1007 11:32:30.066132  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066140  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066201  384891 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.242099217s)
	I1007 11:32:30.066343  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.066347  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.066368  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.066367  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.066377  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066385  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066443  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.066444  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.066450  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.066458  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066464  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066496  384891 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.242103231s)
	I1007 11:32:30.066525  384891 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1007 11:32:30.066633  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.875604078s)
	I1007 11:32:30.066671  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066686  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066701  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.066711  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.066719  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066726  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066812  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.56119506s)
	I1007 11:32:30.066833  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066844  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066928  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.557663248s)
	I1007 11:32:30.066946  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066954  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067053  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067070  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.526488091s)
	I1007 11:32:30.067077  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067083  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067087  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067090  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067097  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067099  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067273  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.087920249s)
	W1007 11:32:30.067306  384891 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 11:32:30.067334  384891 retry.go:31] will retry after 318.73232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
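	(Note) The failure above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, and the API server has not yet established a REST mapping for the new kind, so kubectl exits 1 with "no matches for kind". The log retries after ~319ms (the later re-apply adds --force) and succeeds once the CRDs are registered. A minimal sketch of that retry pattern; the kubectl binary path, flags, and manifest names are taken from the log but are otherwise assumptions:

	package main

	// Re-run an apply while it fails with "no matches for kind", which
	// indicates the CRD's REST mapping is not established yet; bail out
	// immediately on any other error so real failures are not masked.
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func applyWithCRDRetry(kubectl string, args []string, attempts int) error {
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command(kubectl, args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("%v: %s", err, out)
			if !strings.Contains(string(out), "no matches for kind") {
				return lastErr
			}
			time.Sleep(time.Duration(300*(i+1)) * time.Millisecond)
		}
		return lastErr
	}

	func main() {
		err := applyWithCRDRetry("kubectl", []string{
			"apply",
			"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		}, 3)
		fmt.Println("apply result:", err)
	}
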
	I1007 11:32:30.067431  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.004678888s)
	I1007 11:32:30.067452  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067472  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067555  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067585  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067595  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067604  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067610  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067660  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067681  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067687  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067878  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067912  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067919  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067926  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067932  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.070203  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.070251  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.070258  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.070269  384891 addons.go:475] Verifying addon ingress=true in "addons-246818"
	I1007 11:32:30.070513  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.070568  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.070582  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.071060  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.071101  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.071110  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.071123  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.071132  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.071872  384891 out.go:177] * Verifying ingress addon...
	I1007 11:32:30.072804  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.072826  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072856  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.072870  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.072262  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072292  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.072969  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072327  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072351  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.072993  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073063  384891 node_ready.go:35] waiting up to 6m0s for node "addons-246818" to be "Ready" ...
	I1007 11:32:30.073157  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073172  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072402  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072428  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073301  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072444  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072472  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073375  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073383  384891 addons.go:475] Verifying addon registry=true in "addons-246818"
	I1007 11:32:30.072519  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072542  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073455  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073743  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.073754  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.072602  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072689  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072713  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073830  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073838  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.073844  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.072981  384891 addons.go:475] Verifying addon metrics-server=true in "addons-246818"
	I1007 11:32:30.072586  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073928  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073935  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.073941  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.074316  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.074355  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.074361  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.074555  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.074692  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.074699  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.074712  384891 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-246818 service yakd-dashboard -n yakd-dashboard
	
	I1007 11:32:30.074754  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.074782  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.074788  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.075159  384891 out.go:177] * Verifying registry addon...
	I1007 11:32:30.077150  384891 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 11:32:30.077593  384891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 11:32:30.087836  384891 node_ready.go:49] node "addons-246818" has status "Ready":"True"
	I1007 11:32:30.087865  384891 node_ready.go:38] duration metric: took 14.756038ms for node "addons-246818" to be "Ready" ...
	I1007 11:32:30.087879  384891 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:32:30.092003  384891 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 11:32:30.092039  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:30.095848  384891 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 11:32:30.095879  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
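	(Note) The kapi lines above poll pods matching a label selector until each reports the Ready condition. A minimal sketch of that polling loop using client-go — the namespace and selector come from the log, the timeout and poll interval are assumptions, and this is not minikube's kapi helper (it assumes a recent client-go and a kubeconfig in $KUBECONFIG):

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// allReady reports whether every pod in the list has the Ready condition True.
	func allReady(pods []corev1.Pod) bool {
		if len(pods) == 0 {
			return false
		}
		for _, p := range pods {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false
			}
		}
		return true
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		selector := "app.kubernetes.io/name=ingress-nginx" // label selector from the log
		for {
			list, err := cs.CoreV1().Pods("ingress-nginx").List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			if allReady(list.Items) {
				fmt.Println("all pods Ready")
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for pods")
			case <-time.After(2 * time.Second):
			}
		}
	}
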
	I1007 11:32:30.110889  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.110919  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.111265  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.111273  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.111288  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 11:32:30.111382  384891 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
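	(Note) The warning above is a standard optimistic-concurrency conflict: the addon callback read the local-path StorageClass, another writer updated it first, and the stale update was rejected with "the object has been modified". The usual remedy is to re-read the latest object and re-apply the change inside a conflict-retry loop. A minimal client-go sketch of that pattern — the class name and annotation key come from the warning, everything else (and the clientset setup) is assumed; this is not minikube's callback code:

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation on a StorageClass,
	// re-fetching and retrying whenever the update races with another writer.
	func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict error here triggers another Get+Update round
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := markNonDefault(context.Background(), cs, "local-path"); err != nil {
			panic(err)
		}
		fmt.Println("marked local-path as non-default")
	}
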
	I1007 11:32:30.120282  384891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9n6rn" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.121748  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.121764  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.122055  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.122109  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.122125  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.155261  384891 pod_ready.go:93] pod "coredns-7c65d6cfc9-9n6rn" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.155289  384891 pod_ready.go:82] duration metric: took 34.974077ms for pod "coredns-7c65d6cfc9-9n6rn" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.155302  384891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dzpc8" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.178588  384891 pod_ready.go:93] pod "coredns-7c65d6cfc9-dzpc8" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.178617  384891 pod_ready.go:82] duration metric: took 23.305528ms for pod "coredns-7c65d6cfc9-dzpc8" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.178629  384891 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.223158  384891 pod_ready.go:93] pod "etcd-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.223187  384891 pod_ready.go:82] duration metric: took 44.549581ms for pod "etcd-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.223197  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.253914  384891 pod_ready.go:93] pod "kube-apiserver-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.253941  384891 pod_ready.go:82] duration metric: took 30.73707ms for pod "kube-apiserver-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.253954  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.386868  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:32:30.476890  384891 pod_ready.go:93] pod "kube-controller-manager-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.476938  384891 pod_ready.go:82] duration metric: took 222.974328ms for pod "kube-controller-manager-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.476959  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l8kql" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.571544  384891 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-246818" context rescaled to 1 replicas
	I1007 11:32:30.582503  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:30.582873  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:30.914008  384891 pod_ready.go:93] pod "kube-proxy-l8kql" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.914040  384891 pod_ready.go:82] duration metric: took 437.071606ms for pod "kube-proxy-l8kql" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.914052  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:31.084293  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:31.084904  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:31.277897  384891 pod_ready.go:93] pod "kube-scheduler-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:31.277934  384891 pod_ready.go:82] duration metric: took 363.871437ms for pod "kube-scheduler-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:31.277953  384891 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:31.587346  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:31.587502  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:32.188862  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:32.296683  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:32.466486  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.977770361s)
	I1007 11:32:32.466545  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.466560  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.466611  384891 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.39800642s)
	I1007 11:32:32.466755  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.0798406s)
	I1007 11:32:32.466832  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.466844  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.466862  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.466889  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:32.466906  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.466915  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.466922  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.467112  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.467127  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.467136  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.467143  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.467213  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:32.467225  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.467235  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.467250  384891 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-246818"
	I1007 11:32:32.467411  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:32.467414  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.467424  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.468956  384891 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 11:32:32.469005  384891 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 11:32:32.470557  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:32:32.471269  384891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 11:32:32.472164  384891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 11:32:32.472191  384891 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 11:32:32.502795  384891 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 11:32:32.502824  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:32.554269  384891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 11:32:32.554306  384891 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 11:32:32.588477  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:32.588751  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:32.633642  384891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:32:32.633670  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 11:32:32.817741  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:32:32.975678  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:33.085784  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:33.086499  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:33.284978  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:33.476686  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:33.582171  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:33.582790  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:33.982427  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:34.084906  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:34.085799  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:34.308214  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.490411942s)
	I1007 11:32:34.308309  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:34.308332  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:34.308649  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:34.308705  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:34.308723  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:34.308741  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:34.308752  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:34.309132  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:34.309186  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:34.309202  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:34.310559  384891 addons.go:475] Verifying addon gcp-auth=true in "addons-246818"
	I1007 11:32:34.312007  384891 out.go:177] * Verifying gcp-auth addon...
	I1007 11:32:34.314730  384891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 11:32:34.340586  384891 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 11:32:34.340612  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:34.475714  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:34.582546  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:34.583308  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:34.818688  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:34.976405  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:35.082601  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:35.084039  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:35.285036  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:35.318158  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:35.477972  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:35.583376  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:35.583561  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:35.819531  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:35.975590  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:36.082179  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:36.082337  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:36.319330  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:36.476751  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:36.582692  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:36.584000  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:37.005486  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:37.006535  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:37.083365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:37.083910  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:37.287981  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:37.319722  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:37.477822  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:37.581529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:37.582720  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:37.819884  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:37.976935  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:38.082033  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:38.082405  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:38.318841  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:38.475607  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:38.581655  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:38.582226  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:38.819241  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:38.976848  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:39.082867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:39.083274  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:39.290395  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:39.318648  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:39.476451  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:39.582171  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:39.582624  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:39.819410  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:39.977333  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:40.081612  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:40.082203  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:40.319145  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:40.476723  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:40.581603  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:40.583149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:40.818385  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:40.977851  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:41.083017  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:41.083342  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:41.317798  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:41.475982  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:41.582409  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:41.582455  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:41.786127  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:41.819529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:41.976946  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:42.082000  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:42.082192  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:42.318601  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:42.475545  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:42.582736  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:42.583438  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:42.818333  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:42.976980  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:43.083098  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:43.083595  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:43.318576  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:43.503845  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:43.582649  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:43.583155  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:43.818278  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:43.976805  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:44.082470  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:44.082807  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:44.284958  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:44.319223  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:44.476657  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:44.582711  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:44.583066  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:44.818827  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:44.976149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:45.082276  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:45.082484  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:45.318464  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:45.476894  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:45.610547  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:45.610833  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:45.975833  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:45.996872  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:46.082114  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:46.082777  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:46.317822  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:46.476436  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:46.582945  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:46.583120  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:46.784162  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:46.818445  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:46.976526  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:47.082671  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:47.082833  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:47.319655  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:47.476921  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:47.581622  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:47.582699  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:47.818529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:47.977011  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:48.084165  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:48.086044  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:48.319215  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:48.484879  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:48.582304  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:48.582986  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:48.818694  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:48.976728  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:49.081291  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:49.082282  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:49.283787  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:49.318639  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:49.476339  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:49.582576  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:49.582919  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:49.818304  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:49.976650  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:50.081972  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:50.083388  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:50.319189  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:50.476949  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:50.581903  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:50.582534  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:51.138429  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:51.138593  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:51.139224  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:51.139625  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:51.284853  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:51.319510  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:51.478092  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:51.582296  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:51.583977  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:51.821388  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:51.977408  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:52.082306  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:52.082725  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:52.320270  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:52.477071  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:52.581676  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:52.582004  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:52.819335  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:52.976826  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:53.081715  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:53.082217  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:53.286270  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:53.318565  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:53.476657  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:53.582416  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:53.582912  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:53.821038  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:53.976548  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:54.083018  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:54.083157  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:54.318909  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:54.480652  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:54.583081  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:54.583782  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:54.819006  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:54.976399  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:55.081741  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:55.082950  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:55.318290  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:55.477525  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:55.582408  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:55.582694  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:55.784044  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:55.819410  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:55.976273  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:56.081493  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:56.081873  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:56.319113  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:56.476767  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:56.582149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:56.582756  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:56.818865  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:56.977253  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:57.081925  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:57.082420  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:57.318929  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:57.785145  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:57.785322  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:57.785444  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:57.799701  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:57.875340  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:57.976458  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:58.082124  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:58.082502  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:58.318902  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:58.476352  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:58.583758  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:58.583953  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:58.817729  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:58.975913  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:59.084032  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:59.086065  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:59.346848  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:59.476648  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:59.582942  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:59.584115  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:59.821365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:59.986819  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:00.081462  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:00.083518  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:00.287257  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:00.320992  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:00.476599  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:00.583058  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:00.583512  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:00.818832  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:00.976928  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:01.082142  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:01.082422  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:01.320347  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:01.476916  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:01.581829  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:01.582058  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:01.824411  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:01.978086  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:02.082410  384891 kapi.go:107] duration metric: took 32.004807404s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 11:33:02.082721  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:02.318823  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:02.476149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:02.581365  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:02.785380  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:02.819435  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:02.981119  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:03.082298  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:03.318836  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:03.475816  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:03.581866  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:03.820271  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:03.977531  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:04.081370  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:04.318861  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:04.478185  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:04.581057  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:04.786095  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:04.818861  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:04.977359  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:05.081577  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:05.319021  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:05.476415  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:05.582041  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:05.817893  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:05.977602  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:06.081923  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:06.319212  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:06.477018  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:06.582023  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:06.818841  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:06.976129  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:07.082189  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:07.286377  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:07.319883  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:07.476167  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:07.582756  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:07.818624  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:07.977713  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:08.081834  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:08.319188  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:08.477158  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:08.582912  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:08.818256  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:08.976773  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:09.082355  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:09.319241  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:09.476152  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:09.581908  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:09.784186  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:09.817949  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:09.976974  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:10.082168  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:10.318356  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:10.477137  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:10.581246  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:10.819236  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:10.976625  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:11.082510  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:11.319088  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:11.475963  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:11.581311  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:11.785390  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:11.818393  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:11.977640  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:12.081174  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:12.319522  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:12.476944  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:12.582131  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:12.818446  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:12.976621  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:13.081988  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:13.318911  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:13.484798  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:13.582395  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:13.819383  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:13.977648  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:14.082158  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:14.285577  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:14.318713  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:14.475847  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:14.582159  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:14.818441  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:14.977209  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:15.081963  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:15.318737  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:15.476205  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:15.583061  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:15.819153  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:15.976561  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:16.081683  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:16.318410  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:16.476630  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:16.581615  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:16.784072  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:16.818076  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:16.977198  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:17.081611  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:17.320061  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:17.476515  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:17.581786  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:17.818618  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:17.976464  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:18.084173  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:18.318030  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:18.477107  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:18.586160  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:18.784408  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:18.818855  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:18.975975  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:19.083601  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:19.319129  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:19.476165  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:19.581505  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:19.818001  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:19.976718  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:20.082101  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:20.319192  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:20.476616  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:20.581717  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:20.785149  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:20.818020  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:20.976775  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:21.082210  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:21.318711  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:21.475778  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:21.582480  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:21.819356  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:21.977763  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:22.082225  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:22.318697  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:22.476177  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:22.582015  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:22.817984  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:22.976500  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:23.081605  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:23.284652  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:23.319106  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:23.476419  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:23.581621  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:23.818519  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:23.976857  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:24.082273  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:24.319210  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:24.476471  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:24.581691  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:24.818346  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:24.976944  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:25.082182  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:25.285349  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:25.319385  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:25.476777  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:25.582609  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:25.818485  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:25.977168  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:26.082176  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:26.318509  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:26.476390  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:26.581578  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:26.819122  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:26.976649  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:27.081846  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:27.285801  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:27.319965  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:27.476748  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:27.582786  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:27.820119  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:27.977567  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:28.081776  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:28.321486  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:28.476034  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:28.580919  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:28.818302  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:28.976750  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:29.082261  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:29.318773  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:29.476952  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:29.582302  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:29.784755  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:29.818641  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:29.975885  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:30.082754  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:30.318788  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:30.476267  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:30.581482  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:30.818790  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:30.976169  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:31.082040  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:31.318394  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:31.477328  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:31.581590  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:31.785001  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:31.818455  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:31.977285  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:32.082645  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:32.319761  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:32.475996  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:32.580957  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:32.818618  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:32.981189  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:33.082222  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:33.318499  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:33.477371  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:33.581430  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:33.819139  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:33.976629  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:34.348998  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:34.349111  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:34.354582  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:34.477183  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:34.582017  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:34.818854  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:34.975708  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:35.082682  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:35.318096  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:35.476479  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:35.581982  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:35.818348  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:35.976667  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:36.082093  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:36.319301  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:36.477260  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:36.581116  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:36.785438  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:36.818479  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:36.976498  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:37.081603  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:37.318719  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:37.476366  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:37.582055  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:37.818735  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:37.975866  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:38.081879  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:38.318601  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:38.484592  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:38.582279  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:38.818547  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:38.975841  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:39.081986  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:39.284349  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:39.317923  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:39.476365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:39.582175  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:39.818974  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:39.975890  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:40.082033  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:40.318628  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:40.518043  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:40.582189  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:40.819150  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:40.979733  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:41.081822  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:41.284675  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:41.318611  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:41.475350  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:41.581870  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:41.817872  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:41.975624  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:42.082150  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:42.319800  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:42.479033  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:42.583338  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:42.819134  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:42.978046  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:43.083708  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:43.318837  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:43.476705  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:43.582056  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:43.785109  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:43.818104  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:43.976109  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:44.081416  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:44.318991  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:44.476151  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:44.596289  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:44.819051  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:44.976616  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:45.081745  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:45.318842  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:45.476739  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:45.582727  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:45.817867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:45.976600  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:46.082267  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:46.288414  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:46.319714  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:46.476643  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:46.582493  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:46.818948  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:46.977533  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:47.082182  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:47.318238  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:47.476983  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:47.583066  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:47.819252  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:47.978774  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:48.082507  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:48.318486  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:48.476123  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:48.583163  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:48.784677  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:48.822387  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:48.986510  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:49.086137  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:49.323706  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:49.481895  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:49.582564  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:49.819675  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:49.976031  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:50.082594  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:50.319558  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:50.478668  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:50.588098  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:50.788097  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:50.844238  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:50.976971  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:51.083864  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:51.319080  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:51.476545  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:51.581625  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:51.820026  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:51.986619  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:52.092476  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:52.319404  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:52.480622  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:52.588382  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:52.818422  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:52.976771  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:53.082063  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:53.286041  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:53.318561  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:53.476866  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:53.584944  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:53.818557  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:53.976619  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:54.081420  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:54.318813  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:54.475954  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:54.582481  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:54.818913  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:54.976100  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:55.082174  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:55.287305  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:55.318058  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:55.476320  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:55.582149  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:55.826567  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:55.981042  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:56.081276  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:56.319521  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:56.475650  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:56.581596  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:56.818574  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:56.975996  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:57.082643  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:57.626615  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:57.627586  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:57.627720  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:57.631472  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:57.818870  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:57.979364  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:58.081587  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:58.318085  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:58.476312  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:58.581156  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:58.826426  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:58.978242  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:59.081303  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:59.318911  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:59.478537  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:59.582057  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:59.785115  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:59.818776  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:59.980469  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:34:00.082381  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:00.319529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:00.477985  384891 kapi.go:107] duration metric: took 1m28.006709237s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 11:34:00.581976  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:00.819378  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:01.082606  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:01.319729  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:01.582377  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:01.785853  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:34:01.819079  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:02.082352  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:02.318806  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:02.583133  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:02.819833  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:03.082070  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:03.319057  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:03.582749  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:03.818867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:04.081986  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:04.285341  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:34:04.318345  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:04.581902  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:04.818896  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:05.082540  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:05.319169  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:05.582754  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:05.818610  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:06.081323  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:06.286945  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:34:06.319553  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:06.581733  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:06.819609  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:07.081656  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:07.288453  384891 pod_ready.go:93] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"True"
	I1007 11:34:07.288493  384891 pod_ready.go:82] duration metric: took 1m36.010528889s for pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace to be "Ready" ...
	I1007 11:34:07.288510  384891 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-8tqmv" in "kube-system" namespace to be "Ready" ...
	I1007 11:34:07.299285  384891 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-8tqmv" in "kube-system" namespace has status "Ready":"True"
	I1007 11:34:07.299313  384891 pod_ready.go:82] duration metric: took 10.79378ms for pod "nvidia-device-plugin-daemonset-8tqmv" in "kube-system" namespace to be "Ready" ...
	I1007 11:34:07.299332  384891 pod_ready.go:39] duration metric: took 1m37.211435839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:34:07.299353  384891 api_server.go:52] waiting for apiserver process to appear ...
	I1007 11:34:07.299401  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:34:07.299455  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:34:07.321320  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:07.350199  384891 cri.go:89] found id: "c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:07.350228  384891 cri.go:89] found id: ""
	I1007 11:34:07.350239  384891 logs.go:282] 1 containers: [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8]
	I1007 11:34:07.350311  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.355340  384891 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:34:07.355425  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:34:07.403255  384891 cri.go:89] found id: "1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:07.403284  384891 cri.go:89] found id: ""
	I1007 11:34:07.403293  384891 logs.go:282] 1 containers: [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4]
	I1007 11:34:07.403356  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.408181  384891 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:34:07.408259  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:34:07.456781  384891 cri.go:89] found id: "0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:07.456810  384891 cri.go:89] found id: ""
	I1007 11:34:07.456821  384891 logs.go:282] 1 containers: [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965]
	I1007 11:34:07.456880  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.461365  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:34:07.461432  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:34:07.503869  384891 cri.go:89] found id: "c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:07.503900  384891 cri.go:89] found id: ""
	I1007 11:34:07.503911  384891 logs.go:282] 1 containers: [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a]
	I1007 11:34:07.503986  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.508824  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:34:07.508912  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:34:07.553417  384891 cri.go:89] found id: "07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:07.553445  384891 cri.go:89] found id: ""
	I1007 11:34:07.553453  384891 logs.go:282] 1 containers: [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e]
	I1007 11:34:07.553507  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.558607  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:34:07.558691  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:34:07.582482  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:07.609104  384891 cri.go:89] found id: "8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:07.609133  384891 cri.go:89] found id: ""
	I1007 11:34:07.609143  384891 logs.go:282] 1 containers: [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae]
	I1007 11:34:07.609209  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.614014  384891 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:34:07.614095  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:34:07.669307  384891 cri.go:89] found id: ""
	I1007 11:34:07.669339  384891 logs.go:282] 0 containers: []
	W1007 11:34:07.669348  384891 logs.go:284] No container was found matching "kindnet"
	I1007 11:34:07.669360  384891 logs.go:123] Gathering logs for dmesg ...
	I1007 11:34:07.669374  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:34:07.692510  384891 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:34:07.692553  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:34:07.820538  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:07.833306  384891 logs.go:123] Gathering logs for kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] ...
	I1007 11:34:07.833344  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:07.881834  384891 logs.go:123] Gathering logs for kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] ...
	I1007 11:34:07.881872  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:07.922102  384891 logs.go:123] Gathering logs for kubelet ...
	I1007 11:34:07.922135  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 11:34:07.994930  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:07.995159  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:08.014966  384891 logs.go:123] Gathering logs for coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] ...
	I1007 11:34:08.015007  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:08.059810  384891 logs.go:123] Gathering logs for kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] ...
	I1007 11:34:08.059846  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:08.082446  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:08.118806  384891 logs.go:123] Gathering logs for kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] ...
	I1007 11:34:08.118857  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:08.183364  384891 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:34:08.183410  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:34:08.319460  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:08.583736  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:08.819563  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:08.851907  384891 logs.go:123] Gathering logs for container status ...
	I1007 11:34:08.851975  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:34:08.905544  384891 logs.go:123] Gathering logs for etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] ...
	I1007 11:34:08.905576  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:08.973774  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:08.973822  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 11:34:08.973898  384891 out.go:270] X Problems detected in kubelet:
	W1007 11:34:08.973917  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:08.973935  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:08.973949  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:08.973962  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:34:09.082037  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:09.319301  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:09.582172  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:09.818720  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:10.083461  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:10.318771  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:10.582330  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:10.819089  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:11.081911  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:11.321748  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:11.581492  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:11.818375  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:12.082063  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:12.319965  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:12.582369  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:12.819383  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:13.082206  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:13.318240  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:13.583364  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:13.818316  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:14.081551  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:14.318945  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:14.581789  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:14.819411  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:15.081875  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:15.318853  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:15.582528  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:15.818834  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:16.081977  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:16.318787  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:16.582509  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:16.818784  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:17.082467  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:17.319180  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:17.583829  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:17.819020  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:18.083259  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:18.318588  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:18.585693  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:18.818464  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:18.975488  384891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:34:18.998847  384891 api_server.go:72] duration metric: took 1m57.962235499s to wait for apiserver process to appear ...
	I1007 11:34:18.998888  384891 api_server.go:88] waiting for apiserver healthz status ...
	I1007 11:34:18.998936  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:34:18.999018  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:34:19.040445  384891 cri.go:89] found id: "c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:19.040469  384891 cri.go:89] found id: ""
	I1007 11:34:19.040485  384891 logs.go:282] 1 containers: [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8]
	I1007 11:34:19.040551  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.046554  384891 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:34:19.046621  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:34:19.082671  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:19.092133  384891 cri.go:89] found id: "1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:19.092166  384891 cri.go:89] found id: ""
	I1007 11:34:19.092176  384891 logs.go:282] 1 containers: [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4]
	I1007 11:34:19.092241  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.096808  384891 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:34:19.096908  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:34:19.138989  384891 cri.go:89] found id: "0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:19.139023  384891 cri.go:89] found id: ""
	I1007 11:34:19.139035  384891 logs.go:282] 1 containers: [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965]
	I1007 11:34:19.139100  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.143619  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:34:19.143693  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:34:19.191484  384891 cri.go:89] found id: "c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:19.191512  384891 cri.go:89] found id: ""
	I1007 11:34:19.191523  384891 logs.go:282] 1 containers: [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a]
	I1007 11:34:19.191676  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.196448  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:34:19.196521  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:34:19.242455  384891 cri.go:89] found id: "07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:19.242492  384891 cri.go:89] found id: ""
	I1007 11:34:19.242503  384891 logs.go:282] 1 containers: [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e]
	I1007 11:34:19.242564  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.248534  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:34:19.248629  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:34:19.291085  384891 cri.go:89] found id: "8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:19.291114  384891 cri.go:89] found id: ""
	I1007 11:34:19.291124  384891 logs.go:282] 1 containers: [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae]
	I1007 11:34:19.291194  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.295722  384891 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:34:19.295810  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:34:19.318088  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:19.340630  384891 cri.go:89] found id: ""
	I1007 11:34:19.340658  384891 logs.go:282] 0 containers: []
	W1007 11:34:19.340668  384891 logs.go:284] No container was found matching "kindnet"
	I1007 11:34:19.340678  384891 logs.go:123] Gathering logs for kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] ...
	I1007 11:34:19.340701  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:19.398366  384891 logs.go:123] Gathering logs for kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] ...
	I1007 11:34:19.398413  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:19.441039  384891 logs.go:123] Gathering logs for kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] ...
	I1007 11:34:19.441071  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:19.515511  384891 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:34:19.515559  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:34:19.581392  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:19.820008  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:20.082996  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:20.318698  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:20.371437  384891 logs.go:123] Gathering logs for kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] ...
	I1007 11:34:20.371566  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:20.421572  384891 logs.go:123] Gathering logs for container status ...
	I1007 11:34:20.421622  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:34:20.473855  384891 logs.go:123] Gathering logs for kubelet ...
	I1007 11:34:20.473898  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 11:34:20.539155  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:20.539346  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:20.560434  384891 logs.go:123] Gathering logs for dmesg ...
	I1007 11:34:20.560477  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:34:20.578609  384891 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:34:20.578644  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:34:20.582162  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:20.705740  384891 logs.go:123] Gathering logs for etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] ...
	I1007 11:34:20.705772  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:20.771436  384891 logs.go:123] Gathering logs for coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] ...
	I1007 11:34:20.771482  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:20.817335  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:20.817370  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 11:34:20.817442  384891 out.go:270] X Problems detected in kubelet:
	W1007 11:34:20.817457  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:20.817470  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:20.817479  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:20.817488  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:34:20.818512  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:21.082056  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:21.318867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:21.582262  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:21.818795  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:22.083232  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:22.318990  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:22.582413  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:22.819076  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:23.082537  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:23.318303  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:23.583644  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:23.818519  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:24.081687  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:24.318430  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:24.582120  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:24.819111  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:25.086365  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:25.320747  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:25.582278  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:25.819707  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:26.082436  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:26.319403  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:26.582434  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:26.819099  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:27.082857  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:27.318289  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:27.581568  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:27.819777  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:28.081999  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:28.318751  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:28.582679  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:28.818757  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:29.082323  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:29.318830  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:29.582031  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:29.818723  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:30.082134  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:30.319885  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:30.581940  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:30.818806  384891 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1007 11:34:30.824530  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:30.825860  384891 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1007 11:34:30.826750  384891 api_server.go:141] control plane version: v1.31.1
	I1007 11:34:30.826782  384891 api_server.go:131] duration metric: took 11.827885179s to wait for apiserver health ...
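The api_server.go lines above poll the control plane's /healthz endpoint until it answers 200. A minimal Go sketch of that kind of readiness probe follows; the endpoint URL, the two-minute budget, and skipping TLS verification are illustrative assumptions, not minikube's actual configuration.

// healthzwait: a minimal sketch of polling an apiserver /healthz endpoint
// until it reports 200, in the spirit of the api_server.go lines above.
// URL, timeout, and InsecureSkipVerify are assumptions for illustration.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Self-signed cluster cert assumed; a real client would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, string(body))
				return nil
			}
		}
		time.Sleep(2 * time.Second) // poll interval chosen arbitrarily
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.141:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
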
	I1007 11:34:30.826793  384891 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 11:34:30.826818  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:34:30.826869  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:34:30.868009  384891 cri.go:89] found id: "c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:30.868043  384891 cri.go:89] found id: ""
	I1007 11:34:30.868054  384891 logs.go:282] 1 containers: [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8]
	I1007 11:34:30.868116  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:30.872897  384891 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:34:30.872982  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:34:30.921766  384891 cri.go:89] found id: "1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:30.921797  384891 cri.go:89] found id: ""
	I1007 11:34:30.921807  384891 logs.go:282] 1 containers: [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4]
	I1007 11:34:30.921872  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:30.926658  384891 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:34:30.926751  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:34:30.967084  384891 cri.go:89] found id: "0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:30.967110  384891 cri.go:89] found id: ""
	I1007 11:34:30.967121  384891 logs.go:282] 1 containers: [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965]
	I1007 11:34:30.967184  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:30.971720  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:34:30.971806  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:34:31.014014  384891 cri.go:89] found id: "c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:31.014051  384891 cri.go:89] found id: ""
	I1007 11:34:31.014063  384891 logs.go:282] 1 containers: [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a]
	I1007 11:34:31.014128  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:31.019324  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:34:31.019476  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:34:31.061685  384891 cri.go:89] found id: "07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:31.061719  384891 cri.go:89] found id: ""
	I1007 11:34:31.061730  384891 logs.go:282] 1 containers: [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e]
	I1007 11:34:31.061791  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:31.066589  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:34:31.066673  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:34:31.081745  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:31.112923  384891 cri.go:89] found id: "8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:31.112948  384891 cri.go:89] found id: ""
	I1007 11:34:31.112957  384891 logs.go:282] 1 containers: [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae]
	I1007 11:34:31.113010  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:31.118016  384891 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:34:31.118089  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:34:31.171358  384891 cri.go:89] found id: ""
	I1007 11:34:31.171390  384891 logs.go:282] 0 containers: []
	W1007 11:34:31.171402  384891 logs.go:284] No container was found matching "kindnet"
	I1007 11:34:31.171415  384891 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:34:31.171439  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:34:31.307909  384891 logs.go:123] Gathering logs for kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] ...
	I1007 11:34:31.307947  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:31.318066  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:31.370102  384891 logs.go:123] Gathering logs for coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] ...
	I1007 11:34:31.370145  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:31.412898  384891 logs.go:123] Gathering logs for kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] ...
	I1007 11:34:31.412929  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:31.455361  384891 logs.go:123] Gathering logs for kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] ...
	I1007 11:34:31.455399  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:31.525681  384891 logs.go:123] Gathering logs for container status ...
	I1007 11:34:31.525726  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:34:31.581299  384891 logs.go:123] Gathering logs for kubelet ...
	I1007 11:34:31.581352  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 11:34:31.582018  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1007 11:34:31.650024  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:31.650226  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:31.671782  384891 logs.go:123] Gathering logs for dmesg ...
	I1007 11:34:31.671817  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:34:31.692198  384891 logs.go:123] Gathering logs for etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] ...
	I1007 11:34:31.692235  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:31.760832  384891 logs.go:123] Gathering logs for kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] ...
	I1007 11:34:31.760880  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:31.809091  384891 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:34:31.809129  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:34:31.818667  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:32.083426  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:32.318110  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:32.582254  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:32.686330  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:32.686374  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 11:34:32.686450  384891 out.go:270] X Problems detected in kubelet:
	W1007 11:34:32.686461  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:32.686473  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:32.686481  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:32.686488  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:34:32.820112  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:33.082098  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:33.319357  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:33.583417  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:33.819012  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:34.082102  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:34.318854  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:34.582183  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:34.819365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:35.082034  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:35.318900  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:35.582595  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:35.819015  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:36.081981  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:36.319063  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:36.582084  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:36.818989  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:37.082637  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:37.318307  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:37.582037  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:37.819608  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:38.082058  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:38.319071  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:38.582896  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:38.818216  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:39.082926  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:39.318258  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:39.582671  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:39.819037  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:40.082183  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:40.319106  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:40.582450  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:40.818611  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:41.082311  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:41.319060  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:41.582150  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:41.819047  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:42.081964  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:42.318809  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:42.582264  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:42.694665  384891 system_pods.go:59] 17 kube-system pods found
	I1007 11:34:42.694702  384891 system_pods.go:61] "coredns-7c65d6cfc9-9n6rn" [a65cd5da-6560-4c5a-9311-ca855450e9a9] Running
	I1007 11:34:42.694707  384891 system_pods.go:61] "csi-hostpath-attacher-0" [91820122-4ed3-4251-b1fd-f63756f7e814] Running
	I1007 11:34:42.694711  384891 system_pods.go:61] "csi-hostpath-resizer-0" [2a120d65-04bc-42e4-b324-49d7300d4ed8] Running
	I1007 11:34:42.694716  384891 system_pods.go:61] "csi-hostpathplugin-d8rpq" [52c9f352-e70d-47a1-907f-b13d53f6bc60] Running
	I1007 11:34:42.694719  384891 system_pods.go:61] "etcd-addons-246818" [bb627733-dff2-491c-8308-3ac74e5903dc] Running
	I1007 11:34:42.694723  384891 system_pods.go:61] "kube-apiserver-addons-246818" [e9c4665f-2478-4c1f-9cbf-0619491257dd] Running
	I1007 11:34:42.694726  384891 system_pods.go:61] "kube-controller-manager-addons-246818" [5c61899b-9f40-4b5d-b0ab-a796a3c1c8ba] Running
	I1007 11:34:42.694730  384891 system_pods.go:61] "kube-ingress-dns-minikube" [830d0746-7b01-4a11-a0ee-8f9298e96c17] Running
	I1007 11:34:42.694733  384891 system_pods.go:61] "kube-proxy-l8kql" [847b99db-d42a-483a-87e5-f70b492c2430] Running
	I1007 11:34:42.694738  384891 system_pods.go:61] "kube-scheduler-addons-246818" [1fbb2a15-cc03-4580-94f0-5afee1897222] Running
	I1007 11:34:42.694741  384891 system_pods.go:61] "metrics-server-84c5f94fbc-q6j6p" [f37e3b43-4ce4-4879-babb-e6efdf0f3163] Running
	I1007 11:34:42.694746  384891 system_pods.go:61] "nvidia-device-plugin-daemonset-8tqmv" [69715854-4ded-41a3-83c7-1c8c927935d3] Running
	I1007 11:34:42.694749  384891 system_pods.go:61] "registry-66c9cd494c-pdbhh" [0abb32c0-d3dc-447d-a3b9-d672a6f088ff] Running
	I1007 11:34:42.694752  384891 system_pods.go:61] "registry-proxy-nczxq" [f47e8fd0-0149-4ade-8c43-90e4eeb9b7cf] Running
	I1007 11:34:42.694756  384891 system_pods.go:61] "snapshot-controller-56fcc65765-q9hxr" [189d7791-dda8-49aa-b59d-36fdbc31d559] Running
	I1007 11:34:42.694759  384891 system_pods.go:61] "snapshot-controller-56fcc65765-q9tkd" [1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91] Running
	I1007 11:34:42.694763  384891 system_pods.go:61] "storage-provisioner" [2f27f3bc-8533-41d5-b82e-373f84b67952] Running
	I1007 11:34:42.694769  384891 system_pods.go:74] duration metric: took 11.867969785s to wait for pod list to return data ...
	I1007 11:34:42.694780  384891 default_sa.go:34] waiting for default service account to be created ...
	I1007 11:34:42.697608  384891 default_sa.go:45] found service account: "default"
	I1007 11:34:42.697642  384891 default_sa.go:55] duration metric: took 2.852196ms for default service account to be created ...
	I1007 11:34:42.697656  384891 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 11:34:42.706719  384891 system_pods.go:86] 17 kube-system pods found
	I1007 11:34:42.706756  384891 system_pods.go:89] "coredns-7c65d6cfc9-9n6rn" [a65cd5da-6560-4c5a-9311-ca855450e9a9] Running
	I1007 11:34:42.706762  384891 system_pods.go:89] "csi-hostpath-attacher-0" [91820122-4ed3-4251-b1fd-f63756f7e814] Running
	I1007 11:34:42.706766  384891 system_pods.go:89] "csi-hostpath-resizer-0" [2a120d65-04bc-42e4-b324-49d7300d4ed8] Running
	I1007 11:34:42.706770  384891 system_pods.go:89] "csi-hostpathplugin-d8rpq" [52c9f352-e70d-47a1-907f-b13d53f6bc60] Running
	I1007 11:34:42.706774  384891 system_pods.go:89] "etcd-addons-246818" [bb627733-dff2-491c-8308-3ac74e5903dc] Running
	I1007 11:34:42.706778  384891 system_pods.go:89] "kube-apiserver-addons-246818" [e9c4665f-2478-4c1f-9cbf-0619491257dd] Running
	I1007 11:34:42.706782  384891 system_pods.go:89] "kube-controller-manager-addons-246818" [5c61899b-9f40-4b5d-b0ab-a796a3c1c8ba] Running
	I1007 11:34:42.706788  384891 system_pods.go:89] "kube-ingress-dns-minikube" [830d0746-7b01-4a11-a0ee-8f9298e96c17] Running
	I1007 11:34:42.706791  384891 system_pods.go:89] "kube-proxy-l8kql" [847b99db-d42a-483a-87e5-f70b492c2430] Running
	I1007 11:34:42.706795  384891 system_pods.go:89] "kube-scheduler-addons-246818" [1fbb2a15-cc03-4580-94f0-5afee1897222] Running
	I1007 11:34:42.706800  384891 system_pods.go:89] "metrics-server-84c5f94fbc-q6j6p" [f37e3b43-4ce4-4879-babb-e6efdf0f3163] Running
	I1007 11:34:42.706805  384891 system_pods.go:89] "nvidia-device-plugin-daemonset-8tqmv" [69715854-4ded-41a3-83c7-1c8c927935d3] Running
	I1007 11:34:42.706808  384891 system_pods.go:89] "registry-66c9cd494c-pdbhh" [0abb32c0-d3dc-447d-a3b9-d672a6f088ff] Running
	I1007 11:34:42.706812  384891 system_pods.go:89] "registry-proxy-nczxq" [f47e8fd0-0149-4ade-8c43-90e4eeb9b7cf] Running
	I1007 11:34:42.706815  384891 system_pods.go:89] "snapshot-controller-56fcc65765-q9hxr" [189d7791-dda8-49aa-b59d-36fdbc31d559] Running
	I1007 11:34:42.706819  384891 system_pods.go:89] "snapshot-controller-56fcc65765-q9tkd" [1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91] Running
	I1007 11:34:42.706823  384891 system_pods.go:89] "storage-provisioner" [2f27f3bc-8533-41d5-b82e-373f84b67952] Running
	I1007 11:34:42.706835  384891 system_pods.go:126] duration metric: took 9.170306ms to wait for k8s-apps to be running ...
	I1007 11:34:42.706847  384891 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 11:34:42.706901  384891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:34:42.725146  384891 system_svc.go:56] duration metric: took 18.286276ms WaitForService to wait for kubelet
	I1007 11:34:42.725182  384891 kubeadm.go:582] duration metric: took 2m21.688585174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
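The system_svc.go step above shells out to systemd to confirm the kubelet unit is active. A rough equivalent in Go is sketched below; running it locally rather than over SSH, and using the plain unit name "kubelet", are assumptions made to keep the example self-contained.

// kubeletcheck: sketches the "systemctl is-active --quiet ... kubelet" probe
// logged above; a zero exit status means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumption: executed on the node itself, so no sudo/SSH wrapping is shown.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
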
	I1007 11:34:42.725203  384891 node_conditions.go:102] verifying NodePressure condition ...
	I1007 11:34:42.728139  384891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 11:34:42.728194  384891 node_conditions.go:123] node cpu capacity is 2
	I1007 11:34:42.728211  384891 node_conditions.go:105] duration metric: took 3.001618ms to run NodePressure ...
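The node_conditions.go lines read the node's advertised CPU and ephemeral-storage capacity. A hedged client-go sketch of the same lookup is shown here; the kubeconfig path is an assumption, and any reachable cluster context would serve.

// nodecap: a rough sketch of reading node capacity the way the
// node_conditions.go lines above report it.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at this path; swap in any valid path or context.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// These capacity fields back the "ephemeral capacity" and "cpu capacity"
		// figures logged above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String(),
		)
	}
}
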
	I1007 11:34:42.728226  384891 start.go:241] waiting for startup goroutines ...
	I1007 11:34:42.819517  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:43.082232  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:43.319050  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:43.582210  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:43.819348  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:44.081779  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:44.318592  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:44.581627  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:44.818069  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:45.082710  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:45.319371  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:45.581377  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:45.818428  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:46.083012  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:46.320632  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:46.581260  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:46.819209  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:47.082692  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:47.318983  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:47.582357  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:47.823398  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:48.082344  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:48.318267  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:48.581439  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:48.820231  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:49.082123  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:49.318989  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:49.582868  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:49.820088  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:50.084119  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:50.318944  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:50.581942  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:50.818634  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:51.082987  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:51.319771  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:51.582116  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:51.819251  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:52.082449  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:52.318176  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:52.582176  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:52.819387  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:53.081651  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:53.319024  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:53.582594  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:53.819107  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:54.082146  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:54.318787  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:54.582627  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:54.818201  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:55.204294  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:55.319426  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:55.583686  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:55.819569  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:56.082731  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:56.318631  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:56.581113  384891 kapi.go:107] duration metric: took 2m26.503967901s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 11:34:56.819419  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:57.319107  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:57.818908  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:58.322546  384891 kapi.go:107] duration metric: took 2m24.007812557s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 11:34:58.323908  384891 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-246818 cluster.
	I1007 11:34:58.325270  384891 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 11:34:58.326576  384891 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 11:34:58.328149  384891 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, inspektor-gadget, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1007 11:34:58.329558  384891 addons.go:510] duration metric: took 2m37.292909623s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin metrics-server inspektor-gadget cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1007 11:34:58.329605  384891 start.go:246] waiting for cluster config update ...
	I1007 11:34:58.329625  384891 start.go:255] writing updated cluster config ...
	I1007 11:34:58.329888  384891 ssh_runner.go:195] Run: rm -f paused
	I1007 11:34:58.382842  384891 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 11:34:58.384942  384891 out.go:177] * Done! kubectl is now configured to use "addons-246818" cluster and "default" namespace by default
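Most of the preceding lines come from kapi.go:96, which repeatedly lists pods by label selector and reports their phase until they leave Pending. Below is a minimal, hedged client-go sketch of that style of wait loop; the namespace, selector, poll interval, and timeout are illustrative assumptions, not minikube's values.

// labelwait: sketches the label-selector poll loop behind the
// "waiting for pod ... current state: Pending" lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval is an assumption
	}
	return fmt.Errorf("pods matching %q not Running within %s", selector, timeout)
}

func main() {
	// Assumption: default kubeconfig in the user's home directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForRunning(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
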
	
	
	==> CRI-O <==
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.246630967Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301553246601961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8c839fd-5bbd-4c2a-b3f1-0756ec0382d3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.247205552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09dcd01c-453f-467c-a202-87eaad6e93ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.247336176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09dcd01c-453f-467c-a202-87eaad6e93ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.247861581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578cd6b64b7a33b04973e3652c4fd50338ce909ed99d39a329e3b2681b9b15b2,PodSandboxId:c7419157666064339311393cac321367db6a60fbd7fb2da2eedbc1154c20891e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728300895376227796,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-ch9h5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b940f1da-e470-4328-ad14-6d76d655576f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cc
cc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-r
egistrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f
268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1728300831727592399,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c71
1081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:94584fb7b65f4a04b3332cb05dfe5d9a03be61d72cc13b1e41d2e507bcc634a9,PodSandboxId:eb3b6df00e8a2ed242dae4fd1b4b14f99231068d935ae3edb6eb6bf1c9951f19,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728300826238472452,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d9x2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 095cd04f-1405-4793-b2fe-2180ff1c6b67,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6901de250d5db80b8633ea66bb45a95a5f165905e0b2fce6b7dbf7c86a9ce1a6,PodSandboxId:4a1e852c44add89b6859d4040a728f7b644afab4df20360b327d84ebb4ce6a82,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728300826079533365,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ghxb6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5fe5509d-bb38-4bd6-a85d-201faab48723,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e,PodSandboxId:ddcecf5804f3432f425ed1b78bdd0add063adc43981b8616db59207cbca9cbdb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728300824605663492,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6kwqv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 061506d6-ef07-4852-b9f4-9c28e30da0be,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash
: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d
-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85661d1841a178ef76cf9daa9c30150b7e5f427eb86c2a77593ab5a880ef168,PodSandboxId:058d68203dc5a10d4ad6bf69b9b157da8f2de1df0dc98b9b6a2db3c5374fe3ec,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728300782851943338,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6j6p,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f37e3b43-4ce4-4879-babb-e6efdf0f3163,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233f1b64501533871bb65cea805535359df19d9bc4fb45721cb51180629e9cda,PodSandboxId:2050a49c768d532fa3c64b85c357983b737f38d134d5b26475c463e311b349e1,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1728300771610911528,Labels:map[string]string{io.
kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-zg2hq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee95e639-975d-4172-9950-2f0bcdf275d7,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f8820f7e73ef82f74fcb6977b8bd2c946c48c56a45918f4dab4700a51bf037,PodSandboxId:bd8b37277c1840565bec2ee1b43f28b7b24e48ec1e2ddcead00d12af10d36c37,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728300758717575189,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 830d0746-7b01-4a11-a0ee-8f9298e96c17,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Imag
eSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5
f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a
782c114112ef6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodS
andboxId:4af52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224
e02d8,PodSandboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxI
d:660fb1dd2d72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99
fa76880ed25bbc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09dcd01c-453f-467c-a202-87eaad6e93ce name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.291420751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e2e8e40-0b8f-48f3-9ee5-1dbedb0951a4 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.291495154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e2e8e40-0b8f-48f3-9ee5-1dbedb0951a4 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.292805504Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10bf57b1-817c-4236-8901-1cd31bf78a51 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.293956690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301553293894766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10bf57b1-817c-4236-8901-1cd31bf78a51 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.294995914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7b1eb7f-9a93-40b2-8329-3981c6b1f3fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.295073086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7b1eb7f-9a93-40b2-8329-3981c6b1f3fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.295694469Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578cd6b64b7a33b04973e3652c4fd50338ce909ed99d39a329e3b2681b9b15b2,PodSandboxId:c7419157666064339311393cac321367db6a60fbd7fb2da2eedbc1154c20891e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728300895376227796,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-ch9h5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b940f1da-e470-4328-ad14-6d76d655576f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cc
cc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-r
egistrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f
268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1728300831727592399,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c71
1081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:94584fb7b65f4a04b3332cb05dfe5d9a03be61d72cc13b1e41d2e507bcc634a9,PodSandboxId:eb3b6df00e8a2ed242dae4fd1b4b14f99231068d935ae3edb6eb6bf1c9951f19,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728300826238472452,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d9x2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 095cd04f-1405-4793-b2fe-2180ff1c6b67,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6901de250d5db80b8633ea66bb45a95a5f165905e0b2fce6b7dbf7c86a9ce1a6,PodSandboxId:4a1e852c44add89b6859d4040a728f7b644afab4df20360b327d84ebb4ce6a82,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728300826079533365,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ghxb6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5fe5509d-bb38-4bd6-a85d-201faab48723,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e,PodSandboxId:ddcecf5804f3432f425ed1b78bdd0add063adc43981b8616db59207cbca9cbdb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728300824605663492,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6kwqv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 061506d6-ef07-4852-b9f4-9c28e30da0be,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash
: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d
-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85661d1841a178ef76cf9daa9c30150b7e5f427eb86c2a77593ab5a880ef168,PodSandboxId:058d68203dc5a10d4ad6bf69b9b157da8f2de1df0dc98b9b6a2db3c5374fe3ec,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728300782851943338,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6j6p,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f37e3b43-4ce4-4879-babb-e6efdf0f3163,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233f1b64501533871bb65cea805535359df19d9bc4fb45721cb51180629e9cda,PodSandboxId:2050a49c768d532fa3c64b85c357983b737f38d134d5b26475c463e311b349e1,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1728300771610911528,Labels:map[string]string{io.
kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-zg2hq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee95e639-975d-4172-9950-2f0bcdf275d7,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f8820f7e73ef82f74fcb6977b8bd2c946c48c56a45918f4dab4700a51bf037,PodSandboxId:bd8b37277c1840565bec2ee1b43f28b7b24e48ec1e2ddcead00d12af10d36c37,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728300758717575189,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 830d0746-7b01-4a11-a0ee-8f9298e96c17,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Imag
eSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5
f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a
782c114112ef6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodS
andboxId:4af52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224
e02d8,PodSandboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxI
d:660fb1dd2d72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99
fa76880ed25bbc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7b1eb7f-9a93-40b2-8329-3981c6b1f3fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.330241892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a30ac55f-cd27-4dcf-ae2c-b139658925da name=/runtime.v1.RuntimeService/Version
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.330378222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a30ac55f-cd27-4dcf-ae2c-b139658925da name=/runtime.v1.RuntimeService/Version
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.331426363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3987b619-685a-4f51-927e-07abd9aa1dce name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.332489890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301553332463774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3987b619-685a-4f51-927e-07abd9aa1dce name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.333388044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b91bcfd6-05c6-4623-91c2-da34cbb7de12 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.333460893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b91bcfd6-05c6-4623-91c2-da34cbb7de12 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.334155385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578cd6b64b7a33b04973e3652c4fd50338ce909ed99d39a329e3b2681b9b15b2,PodSandboxId:c7419157666064339311393cac321367db6a60fbd7fb2da2eedbc1154c20891e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728300895376227796,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-ch9h5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b940f1da-e470-4328-ad14-6d76d655576f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cc
cc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-r
egistrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f
268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1728300831727592399,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c71
1081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:94584fb7b65f4a04b3332cb05dfe5d9a03be61d72cc13b1e41d2e507bcc634a9,PodSandboxId:eb3b6df00e8a2ed242dae4fd1b4b14f99231068d935ae3edb6eb6bf1c9951f19,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728300826238472452,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d9x2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 095cd04f-1405-4793-b2fe-2180ff1c6b67,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6901de250d5db80b8633ea66bb45a95a5f165905e0b2fce6b7dbf7c86a9ce1a6,PodSandboxId:4a1e852c44add89b6859d4040a728f7b644afab4df20360b327d84ebb4ce6a82,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728300826079533365,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ghxb6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5fe5509d-bb38-4bd6-a85d-201faab48723,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e,PodSandboxId:ddcecf5804f3432f425ed1b78bdd0add063adc43981b8616db59207cbca9cbdb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728300824605663492,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6kwqv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 061506d6-ef07-4852-b9f4-9c28e30da0be,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash
: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d
-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85661d1841a178ef76cf9daa9c30150b7e5f427eb86c2a77593ab5a880ef168,PodSandboxId:058d68203dc5a10d4ad6bf69b9b157da8f2de1df0dc98b9b6a2db3c5374fe3ec,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728300782851943338,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6j6p,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f37e3b43-4ce4-4879-babb-e6efdf0f3163,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233f1b64501533871bb65cea805535359df19d9bc4fb45721cb51180629e9cda,PodSandboxId:2050a49c768d532fa3c64b85c357983b737f38d134d5b26475c463e311b349e1,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1728300771610911528,Labels:map[string]string{io.
kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-zg2hq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee95e639-975d-4172-9950-2f0bcdf275d7,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f8820f7e73ef82f74fcb6977b8bd2c946c48c56a45918f4dab4700a51bf037,PodSandboxId:bd8b37277c1840565bec2ee1b43f28b7b24e48ec1e2ddcead00d12af10d36c37,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728300758717575189,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 830d0746-7b01-4a11-a0ee-8f9298e96c17,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Imag
eSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5
f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a
782c114112ef6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodS
andboxId:4af52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224
e02d8,PodSandboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxI
d:660fb1dd2d72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99
fa76880ed25bbc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b91bcfd6-05c6-4623-91c2-da34cbb7de12 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.374621566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e902549e-127a-40c0-88e7-d649986527bc name=/runtime.v1.RuntimeService/Version
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.374713433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e902549e-127a-40c0-88e7-d649986527bc name=/runtime.v1.RuntimeService/Version
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.377887567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23584143-3901-4c72-94f8-b6bf58f60307 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.379576028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301553379543910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23584143-3901-4c72-94f8-b6bf58f60307 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.382703670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8ac90e9-30ee-43bb-9596-d39cb0664a6c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.382786395Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8ac90e9-30ee-43bb-9596-d39cb0664a6c name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:45:53 addons-246818 crio[659]: time="2024-10-07 11:45:53.383492583Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:578cd6b64b7a33b04973e3652c4fd50338ce909ed99d39a329e3b2681b9b15b2,PodSandboxId:c7419157666064339311393cac321367db6a60fbd7fb2da2eedbc1154c20891e,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728300895376227796,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-ch9h5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b940f1da-e470-4328-ad14-6d76d655576f,},Annotations:map[string]strin
g{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,An
notations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cc
cc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-r
egistrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f
268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1728300831727592399,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c71
1081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod:
30,},},&Container{Id:94584fb7b65f4a04b3332cb05dfe5d9a03be61d72cc13b1e41d2e507bcc634a9,PodSandboxId:eb3b6df00e8a2ed242dae4fd1b4b14f99231068d935ae3edb6eb6bf1c9951f19,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728300826238472452,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d9x2b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 095cd04f-1405-4793-b2fe-2180ff1c6b67,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kub
ernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6901de250d5db80b8633ea66bb45a95a5f165905e0b2fce6b7dbf7c86a9ce1a6,PodSandboxId:4a1e852c44add89b6859d4040a728f7b644afab4df20360b327d84ebb4ce6a82,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728300826079533365,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ghxb6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5fe5509d-bb38-4bd6-a85d-201faab48723,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container
.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e,PodSandboxId:ddcecf5804f3432f425ed1b78bdd0add063adc43981b8616db59207cbca9cbdb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728300824605663492,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6kwqv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 061506d6-ef07-4852-b9f4-9c28e30da0be,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash
: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d
-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85661d1841a178ef76cf9daa9c30150b7e5f427eb86c2a77593ab5a880ef168,PodSandboxId:058d68203dc5a10d4ad6bf69b9b157da8f2de1df0dc98b9b6a2db3c5374fe3ec,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728300782851943338,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6j6p,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: f37e3b43-4ce4-4879-babb-e6efdf0f3163,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:233f1b64501533871bb65cea805535359df19d9bc4fb45721cb51180629e9cda,PodSandboxId:2050a49c768d532fa3c64b85c357983b737f38d134d5b26475c463e311b349e1,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2ebbaeeba1bd01a80097b8a834ff2a86498d89f3ea11470c0f0ba298931b7cb,State:CONTAINER_RUNNING,CreatedAt:1728300771610911528,Labels:map[string]string{io.
kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-5b584cc74-zg2hq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ee95e639-975d-4172-9950-2f0bcdf275d7,},Annotations:map[string]string{io.kubernetes.container.hash: fda6bb5,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51f8820f7e73ef82f74fcb6977b8bd2c946c48c56a45918f4dab4700a51bf037,PodSandboxId:bd8b37277c1840565bec2ee1b43f28b7b24e48ec1e2ddcead00d12af10d36c37,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728300758717575189,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 830d0746-7b01-4a11-a0ee-8f9298e96c17,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&Imag
eSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5
f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a
782c114112ef6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodS
andboxId:4af52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224
e02d8,PodSandboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxI
d:660fb1dd2d72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99
fa76880ed25bbc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8ac90e9-30ee-43bb-9596-d39cb0664a6c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	018072193f0f9       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                                              2 minutes ago       Running             nginx                                    0                   d49de85842a0d       nginx
	578cd6b64b7a3       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             10 minutes ago      Running             controller                               0                   c741915766606       ingress-nginx-controller-bc57996ff-ch9h5
	57828bc9be9d5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	47756e0237323       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          11 minutes ago      Running             csi-provisioner                          0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	55b8cd6e90ea4       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            11 minutes ago      Running             liveness-probe                           0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	870a7af54cbdc       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           11 minutes ago      Running             hostpath                                 0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	d50c7be11d706       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                12 minutes ago      Running             node-driver-registrar                    0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	cb3c49e5a57e0       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             12 minutes ago      Running             csi-attacher                             0                   8bd0ba34143b7       csi-hostpath-attacher-0
	79ce9975927b1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   12 minutes ago      Running             csi-external-health-monitor-controller   0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	ea77c9e2ea78d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              12 minutes ago      Running             csi-resizer                              0                   37fe00b1ba658       csi-hostpath-resizer-0
	94584fb7b65f4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   12 minutes ago      Exited              patch                                    0                   eb3b6df00e8a2       ingress-nginx-admission-patch-d9x2b
	6901de250d5db       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   12 minutes ago      Exited              create                                   0                   4a1e852c44add       ingress-nginx-admission-create-ghxb6
	1944cdab75253       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             12 minutes ago      Running             local-path-provisioner                   0                   ddcecf5804f34       local-path-provisioner-86d989889c-6kwqv
	72f67a14ad810       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      12 minutes ago      Running             volume-snapshot-controller               0                   b45a2edd29772       snapshot-controller-56fcc65765-q9tkd
	d4f852c16268c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      12 minutes ago      Running             volume-snapshot-controller               0                   ad1976920b544       snapshot-controller-56fcc65765-q9hxr
	b85661d1841a1       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        12 minutes ago      Running             metrics-server                           0                   058d68203dc5a       metrics-server-84c5f94fbc-q6j6p
	233f1b6450153       gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf                               13 minutes ago      Running             cloud-spanner-emulator                   0                   2050a49c768d5       cloud-spanner-emulator-5b584cc74-zg2hq
	51f8820f7e73e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             13 minutes ago      Running             minikube-ingress-dns                     0                   bd8b37277c184       kube-ingress-dns-minikube
	64b3fe56b0b4d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             13 minutes ago      Running             storage-provisioner                      0                   4d66856d95293       storage-provisioner
	0282c1110abcf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             13 minutes ago      Running             coredns                                  0                   81ad4b72c15e5       coredns-7c65d6cfc9-9n6rn
	07021166cf32e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             13 minutes ago      Running             kube-proxy                               0                   946e3367f9d80       kube-proxy-l8kql
	c89d7f8df3494       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             13 minutes ago      Running             kube-scheduler                           0                   660fb1dd2d723       kube-scheduler-addons-246818
	8f63af3616abb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             13 minutes ago      Running             kube-controller-manager                  0                   4af52b2553e39       kube-controller-manager-addons-246818
	1c2b9ede2bcb3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             13 minutes ago      Running             etcd                                     0                   d314e18e8281d       etcd-addons-246818
	c555e8eeff012       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             13 minutes ago      Running             kube-apiserver                           0                   9eda8e53f6a53       kube-apiserver-addons-246818
	
	
	==> coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] <==
	[INFO] 10.244.0.7:36417 - 18826 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000102704s
	[INFO] 10.244.0.7:36417 - 59820 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000171556s
	[INFO] 10.244.0.7:36417 - 55455 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000067091s
	[INFO] 10.244.0.7:36417 - 37440 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000127426s
	[INFO] 10.244.0.7:36417 - 57071 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000142361s
	[INFO] 10.244.0.7:36417 - 12379 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000150621s
	[INFO] 10.244.0.7:36417 - 34976 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000142221s
	[INFO] 10.244.0.7:37341 - 64030 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000106697s
	[INFO] 10.244.0.7:37341 - 64299 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00006075s
	[INFO] 10.244.0.7:43235 - 48545 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047416s
	[INFO] 10.244.0.7:43235 - 48807 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077668s
	[INFO] 10.244.0.7:42457 - 52274 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038889s
	[INFO] 10.244.0.7:42457 - 52502 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000077159s
	[INFO] 10.244.0.7:46275 - 65152 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000053406s
	[INFO] 10.244.0.7:46275 - 65356 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104709s
	[INFO] 10.244.0.21:52186 - 58802 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000480629s
	[INFO] 10.244.0.21:39606 - 20549 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000135444s
	[INFO] 10.244.0.21:44303 - 37058 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00008359s
	[INFO] 10.244.0.21:47786 - 59818 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000288955s
	[INFO] 10.244.0.21:55581 - 60214 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000187035s
	[INFO] 10.244.0.21:42043 - 35787 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077688s
	[INFO] 10.244.0.21:58404 - 57333 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002316717s
	[INFO] 10.244.0.21:58056 - 31374 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002627126s
	[INFO] 10.244.0.24:56494 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000935731s
	[INFO] 10.244.0.24:38140 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000172895s
	
	
	==> describe nodes <==
	Name:               addons-246818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-246818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=addons-246818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T11_32_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-246818
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-246818"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 11:32:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-246818
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 11:45:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 11:43:49 +0000   Mon, 07 Oct 2024 11:32:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 11:43:49 +0000   Mon, 07 Oct 2024 11:32:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 11:43:49 +0000   Mon, 07 Oct 2024 11:32:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 11:43:49 +0000   Mon, 07 Oct 2024 11:32:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    addons-246818
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a7e71aa8d4d4e109baa99d216d2d35a
	  System UUID:                5a7e71aa-8d4d-4e10-9baa-99d216d2d35a
	  Boot ID:                    1e1e4db1-e3af-4cfb-96cf-4a407d094dcb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     cloud-spanner-emulator-5b584cc74-zg2hq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-69v2g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-ch9h5                      100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-9n6rn                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 csi-hostpathplugin-d8rpq                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-addons-246818                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-246818                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-246818                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-l8kql                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-246818                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-q6j6p                               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-q9hxr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 snapshot-controller-56fcc65765-q9tkd                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  local-path-storage          helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  local-path-storage          local-path-provisioner-86d989889c-6kwqv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node addons-246818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node addons-246818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node addons-246818 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m   kubelet          Node addons-246818 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node addons-246818 event: Registered Node addons-246818 in Controller
	
	
	==> dmesg <==
	[  +4.246010] systemd-fstab-generator[740]: Ignoring "noauto" option for root device
	[  +3.967870] systemd-fstab-generator[857]: Ignoring "noauto" option for root device
	[  +0.057943] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.987853] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.080762] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.824342] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.804390] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.058503] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.053847] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.458158] kauditd_printk_skb: 78 callbacks suppressed
	[  +8.783756] kauditd_printk_skb: 22 callbacks suppressed
	[Oct 7 11:33] kauditd_printk_skb: 32 callbacks suppressed
	[ +42.426579] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.667940] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.940260] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 7 11:34] kauditd_printk_skb: 2 callbacks suppressed
	[ +48.225055] kauditd_printk_skb: 15 callbacks suppressed
	[Oct 7 11:35] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.972304] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 7 11:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.308875] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.325093] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.739676] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.143132] kauditd_printk_skb: 20 callbacks suppressed
	[Oct 7 11:45] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] <==
	{"level":"info","ts":"2024-10-07T11:33:57.597884Z","caller":"traceutil/trace.go:171","msg":"trace[1887693857] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-84c5f94fbc-q6j6p; range_end:; response_count:1; response_revision:1069; }","duration":"340.526541ms","start":"2024-10-07T11:33:57.257348Z","end":"2024-10-07T11:33:57.597875Z","steps":["trace[1887693857] 'agreement among raft nodes before linearized reading'  (duration: 340.389926ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.597927Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:33:57.257315Z","time spent":"340.604421ms","remote":"127.0.0.1:46982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":60,"response count":1,"response size":4589,"request content":"key:\"/registry/pods/kube-system/metrics-server-84c5f94fbc-q6j6p\" "}
	{"level":"warn","ts":"2024-10-07T11:33:57.598125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"331.971064ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:33:57.598168Z","caller":"traceutil/trace.go:171","msg":"trace[1610592843] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1069; }","duration":"332.016264ms","start":"2024-10-07T11:33:57.266146Z","end":"2024-10-07T11:33:57.598162Z","steps":["trace[1610592843] 'agreement among raft nodes before linearized reading'  (duration: 331.93248ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.598655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.659891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-07T11:33:57.598711Z","caller":"traceutil/trace.go:171","msg":"trace[1734221806] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1069; }","duration":"138.717909ms","start":"2024-10-07T11:33:57.459985Z","end":"2024-10-07T11:33:57.598703Z","steps":["trace[1734221806] 'agreement among raft nodes before linearized reading'  (duration: 138.621511ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.598843Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.683257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:33:57.598900Z","caller":"traceutil/trace.go:171","msg":"trace[1418508135] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"147.743392ms","start":"2024-10-07T11:33:57.451149Z","end":"2024-10-07T11:33:57.598892Z","steps":["trace[1418508135] 'agreement among raft nodes before linearized reading'  (duration: 147.663333ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.598872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.22319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:33:57.598979Z","caller":"traceutil/trace.go:171","msg":"trace[1080542174] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"304.328661ms","start":"2024-10-07T11:33:57.294641Z","end":"2024-10-07T11:33:57.598970Z","steps":["trace[1080542174] 'agreement among raft nodes before linearized reading'  (duration: 304.214885ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.599028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:33:57.294615Z","time spent":"304.404536ms","remote":"127.0.0.1:46982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-07T11:34:55.174206Z","caller":"traceutil/trace.go:171","msg":"trace[2115876705] linearizableReadLoop","detail":"{readStateIndex:1224; appliedIndex:1223; }","duration":"118.016178ms","start":"2024-10-07T11:34:55.056148Z","end":"2024-10-07T11:34:55.174164Z","steps":["trace[2115876705] 'read index received'  (duration: 117.833312ms)","trace[2115876705] 'applied index is now lower than readState.Index'  (duration: 181.97µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T11:34:55.174576Z","caller":"traceutil/trace.go:171","msg":"trace[695574193] transaction","detail":"{read_only:false; response_revision:1176; number_of_response:1; }","duration":"175.99018ms","start":"2024-10-07T11:34:54.998568Z","end":"2024-10-07T11:34:55.174558Z","steps":["trace[695574193] 'process raft request'  (duration: 175.463941ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:34:55.174726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.52903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:34:55.175588Z","caller":"traceutil/trace.go:171","msg":"trace[1717354007] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1176; }","duration":"119.452051ms","start":"2024-10-07T11:34:55.056121Z","end":"2024-10-07T11:34:55.175573Z","steps":["trace[1717354007] 'agreement among raft nodes before linearized reading'  (duration: 118.512449ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:42:12.102784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1443}
	{"level":"info","ts":"2024-10-07T11:42:12.139478Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1443,"took":"35.711987ms","hash":2488319999,"current-db-size-bytes":5902336,"current-db-size":"5.9 MB","current-db-size-in-use-bytes":2895872,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-10-07T11:42:12.139591Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2488319999,"revision":1443,"compact-revision":-1}
	{"level":"info","ts":"2024-10-07T11:43:18.110906Z","caller":"traceutil/trace.go:171","msg":"trace[325646834] linearizableReadLoop","detail":"{readStateIndex:2187; appliedIndex:2186; }","duration":"261.537214ms","start":"2024-10-07T11:43:17.849341Z","end":"2024-10-07T11:43:18.110878Z","steps":["trace[325646834] 'read index received'  (duration: 261.404239ms)","trace[325646834] 'applied index is now lower than readState.Index'  (duration: 132.582µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T11:43:18.111051Z","caller":"traceutil/trace.go:171","msg":"trace[977940061] transaction","detail":"{read_only:false; response_revision:2029; number_of_response:1; }","duration":"389.974345ms","start":"2024-10-07T11:43:17.721067Z","end":"2024-10-07T11:43:18.111041Z","steps":["trace[977940061] 'process raft request'  (duration: 389.72661ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:43:18.111247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.449824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"warn","ts":"2024-10-07T11:43:18.111341Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:43:17.721046Z","time spent":"390.024254ms","remote":"127.0.0.1:47046","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-khdavvvmsdaoutnun36u7rbvlu\" mod_revision:1961 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-khdavvvmsdaoutnun36u7rbvlu\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-khdavvvmsdaoutnun36u7rbvlu\" > >"}
	{"level":"info","ts":"2024-10-07T11:43:18.111353Z","caller":"traceutil/trace.go:171","msg":"trace[2088660386] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2029; }","duration":"175.589035ms","start":"2024-10-07T11:43:17.935755Z","end":"2024-10-07T11:43:18.111344Z","steps":["trace[2088660386] 'agreement among raft nodes before linearized reading'  (duration: 175.35089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:43:18.111578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.227097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-10-07T11:43:18.111600Z","caller":"traceutil/trace.go:171","msg":"trace[668771085] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:2029; }","duration":"262.260378ms","start":"2024-10-07T11:43:17.849335Z","end":"2024-10-07T11:43:18.111595Z","steps":["trace[668771085] 'agreement among raft nodes before linearized reading'  (duration: 262.135923ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:45:53 up 14 min,  0 users,  load average: 0.24, 0.44, 0.41
	Linux addons-246818 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1007 11:34:07.194787       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.180.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.180.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.180.136:443: connect: connection refused" logger="UnhandledError"
	W1007 11:34:08.194346       1 handler_proxy.go:99] no RequestInfo found in the context
	W1007 11:34:08.194391       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 11:34:08.194399       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1007 11:34:08.194468       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 11:34:08.195529       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 11:34:08.195605       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 11:34:12.209010       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 11:34:12.209502       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1007 11:34:12.210058       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.180.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.180.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.180.136:443: i/o timeout" logger="UnhandledError"
	I1007 11:34:12.229890       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1007 11:43:13.404446       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.123.192"}
	I1007 11:43:31.610761       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 11:43:31.793061       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.111.126"}
	I1007 11:43:35.415346       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1007 11:43:36.447558       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1007 11:45:52.143032       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.225.248"}
	
	
	==> kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] <==
	I1007 11:43:36.240778       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E1007 11:43:36.449971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 11:43:37.909862       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:43:37.909953       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 11:43:40.688882       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:43:40.689020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 11:43:45.518762       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W1007 11:43:46.070553       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:43:46.070601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 11:43:49.720322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-246818"
	I1007 11:43:51.354096       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1007 11:43:51.354220       1 shared_informer.go:320] Caches are synced for resource quota
	I1007 11:43:51.410734       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1007 11:43:51.410892       1 shared_informer.go:320] Caches are synced for garbage collector
	W1007 11:43:57.424954       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:43:57.425170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 11:44:20.980918       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:44:20.981062       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 11:45:12.140719       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:45:12.140965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 11:45:48.645495       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:45:48.645655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 11:45:51.930930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.003567ms"
	I1007 11:45:51.970786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.784177ms"
	I1007 11:45:51.971319       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="210.354µs"
	
	
	==> kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 11:32:23.243441       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 11:32:23.257157       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	E1007 11:32:23.257303       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:32:23.344187       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 11:32:23.344232       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 11:32:23.344291       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:32:23.348157       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:32:23.349642       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:32:23.349675       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:32:23.353061       1 config.go:199] "Starting service config controller"
	I1007 11:32:23.353107       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:32:23.353132       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:32:23.353136       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:32:23.353652       1 config.go:328] "Starting node config controller"
	I1007 11:32:23.353680       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:32:23.453423       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:32:23.453488       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:32:23.453719       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] <==
	W1007 11:32:13.856022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 11:32:13.856054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.719501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 11:32:14.719572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.721026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 11:32:14.721098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.734053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 11:32:14.734189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.747594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 11:32:14.747648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.853414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 11:32:14.853573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.943033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 11:32:14.943144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.979068       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 11:32:14.979173       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 11:32:15.003337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 11:32:15.003472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:15.093807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 11:32:15.093884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:15.121824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 11:32:15.121876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:15.145698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 11:32:15.145757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 11:32:17.639557       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 11:45:09 addons-246818 kubelet[1196]: E1007 11:45:09.371533    1196 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 07 11:45:09 addons-246818 kubelet[1196]: E1007 11:45:09.371600    1196 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 07 11:45:09 addons-246818 kubelet[1196]: E1007 11:45:09.371865    1196 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fs7ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(7dd2a563-8ddd-4a27-b356-1d2368c56e79): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 07 11:45:09 addons-246818 kubelet[1196]: E1007 11:45:09.373457    1196 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="7dd2a563-8ddd-4a27-b356-1d2368c56e79"
	Oct 07 11:45:09 addons-246818 kubelet[1196]: I1007 11:45:09.525405    1196 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-5b584cc74-zg2hq" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:45:13 addons-246818 kubelet[1196]: I1007 11:45:13.525894    1196 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:45:13 addons-246818 kubelet[1196]: E1007 11:45:13.528464    1196 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9d331845-59f4-4092-938c-97591d81951b"
	Oct 07 11:45:16 addons-246818 kubelet[1196]: E1007 11:45:16.557598    1196 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 11:45:16 addons-246818 kubelet[1196]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 11:45:16 addons-246818 kubelet[1196]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 11:45:16 addons-246818 kubelet[1196]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 11:45:16 addons-246818 kubelet[1196]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 11:45:16 addons-246818 kubelet[1196]: E1007 11:45:16.878500    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301516877635376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:45:16 addons-246818 kubelet[1196]: E1007 11:45:16.878708    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301516877635376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:45:24 addons-246818 kubelet[1196]: I1007 11:45:24.526010    1196 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:45:24 addons-246818 kubelet[1196]: E1007 11:45:24.526572    1196 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7dd2a563-8ddd-4a27-b356-1d2368c56e79"
	Oct 07 11:45:24 addons-246818 kubelet[1196]: E1007 11:45:24.540569    1196 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9d331845-59f4-4092-938c-97591d81951b"
	Oct 07 11:45:26 addons-246818 kubelet[1196]: E1007 11:45:26.881222    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301526880733421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:45:26 addons-246818 kubelet[1196]: E1007 11:45:26.881327    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301526880733421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:45:36 addons-246818 kubelet[1196]: I1007 11:45:36.525951    1196 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:45:36 addons-246818 kubelet[1196]: E1007 11:45:36.884141    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301536883582733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:45:36 addons-246818 kubelet[1196]: E1007 11:45:36.884252    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301536883582733,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:45:46 addons-246818 kubelet[1196]: E1007 11:45:46.887348    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301546886503858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:45:46 addons-246818 kubelet[1196]: E1007 11:45:46.887407    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301546886503858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:508143,},InodesUsed:&UInt64Value{Value:177,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:45:52 addons-246818 kubelet[1196]: I1007 11:45:52.038595    1196 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkjd\" (UniqueName: \"kubernetes.io/projected/e73fb85b-64fc-40a4-983f-7278e1c3e3b7-kube-api-access-khkjd\") pod \"hello-world-app-55bf9c44b4-69v2g\" (UID: \"e73fb85b-64fc-40a4-983f-7278e1c3e3b7\") " pod="default/hello-world-app-55bf9c44b4-69v2g"
	
	
	==> storage-provisioner [64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77] <==
	I1007 11:32:29.154950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:32:29.177899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:32:29.177961       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:32:29.210127       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:32:29.210330       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-246818_c20493a4-b4c1-4d82-aa60-bc8f32f150cc!
	I1007 11:32:29.211374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd5fb25e-787a-4fbd-bcb7-131f507b7555", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-246818_c20493a4-b4c1-4d82-aa60-bc8f32f150cc became leader
	I1007 11:32:29.318137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-246818_c20493a4-b4c1-4d82-aa60-bc8f32f150cc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-246818 -n addons-246818
helpers_test.go:261: (dbg) Run:  kubectl --context addons-246818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path ingress-nginx-admission-create-ghxb6 ingress-nginx-admission-patch-d9x2b helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-246818 describe pod busybox hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path ingress-nginx-admission-create-ghxb6 ingress-nginx-admission-patch-d9x2b helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-246818 describe pod busybox hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path ingress-nginx-admission-create-ghxb6 ingress-nginx-admission-patch-d9x2b helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6: exit status 1 (103.186454ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-246818/192.168.39.141
	Start Time:       Mon, 07 Oct 2024 11:35:01 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6r7hg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6r7hg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                           Age                   From               Message
	  ----     ------                           ----                  ----               -------
	  Normal   Scheduled                        10m                   default-scheduler  Successfully assigned default/busybox to addons-246818
	  Normal   Pulling                          9m30s (x4 over 10m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed                           9m30s (x4 over 10m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed                           9m30s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed                           9m (x6 over 10m)      kubelet            Error: ImagePullBackOff
	  Normal   BackOff                          5m41s (x20 over 10m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  FailedToRetrieveImagePullSecret  41s (x10 over 2m40s)  kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.
	
	
	Name:             hello-world-app-55bf9c44b4-69v2g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-246818/192.168.39.141
	Start Time:       Mon, 07 Oct 2024 11:45:51 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-khkjd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-khkjd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-69v2g to addons-246818
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-246818/192.168.39.141
	Start Time:       Mon, 07 Oct 2024 11:43:36 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fs7ff (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-fs7ff:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m18s                default-scheduler  Successfully assigned default/task-pv-pod to addons-246818
	  Warning  Failed     107s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     45s (x2 over 107s)   kubelet            Error: ErrImagePull
	  Warning  Failed     45s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    30s (x2 over 106s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     30s (x2 over 106s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    19s (x3 over 2m17s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42qhr (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-42qhr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ghxb6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d9x2b" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-246818 describe pod busybox hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path ingress-nginx-admission-create-ghxb6 ingress-nginx-admission-patch-d9x2b helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6: exit status 1
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-246818 addons disable ingress-dns --alsologtostderr -v=1: (1.210321103s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable ingress --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-246818 addons disable ingress --alsologtostderr -v=1: (7.758155472s)
--- FAIL: TestAddons/parallel/Ingress (152.33s)
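The two pull failures behind this result are visible in the describe output above: the busybox pod cannot retrieve the gcp-auth image pull secret, and task-pv-pod is rejected by Docker Hub's toomanyrequests rate limit. As a minimal sketch (not part of minikube's test harness) of how to pull just those Warning events out of a cluster like this one, the Go snippet below shells out to kubectl the same way the "(dbg) Run:" lines do; the context and pod names are taken from this report, while the helper name and the event filtering are illustrative assumptions.

// Minimal sketch, assuming the addons-246818 context from this report is still
// reachable. pullEvents is a hypothetical helper, not a minikube test function.
package main

import (
	"fmt"
	"os/exec"
)

// pullEvents lists the Warning events attached to one pod via kubectl.
func pullEvents(kubeContext, namespace, pod string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "events", "-n", namespace,
		"--field-selector", "involvedObject.name="+pod+",type=Warning",
	).CombinedOutput()
	return string(out), err
}

func main() {
	// busybox failed on the gcp-auth pull secret; task-pv-pod on the Docker Hub rate limit.
	for _, pod := range []string{"busybox", "task-pv-pod"} {
		out, err := pullEvents("addons-246818", "default", pod)
		if err != nil {
			fmt.Printf("kubectl failed for %s: %v\n", pod, err)
			continue
		}
		fmt.Printf("warning events for default/%s:\n%s\n", pod, out)
	}
}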

TestAddons/parallel/MetricsServer (334.06s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.231834ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-q6j6p" [f37e3b43-4ce4-4879-babb-e6efdf0f3163] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005214452s
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (73.088689ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 11m8.558554957s

** /stderr **
I1007 11:43:29.561037  384271 retry.go:31] will retry after 3.674514779s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (72.237723ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 11m12.305609758s

** /stderr **
I1007 11:43:33.308289  384271 retry.go:31] will retry after 3.782483355s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (81.248115ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 11m16.170808069s

** /stderr **
I1007 11:43:37.173264  384271 retry.go:31] will retry after 9.065657484s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (69.239612ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 11m25.306805941s

** /stderr **
I1007 11:43:46.309381  384271 retry.go:31] will retry after 7.411882257s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (70.335818ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 11m32.789757474s

** /stderr **
I1007 11:43:53.792075  384271 retry.go:31] will retry after 15.669180361s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (67.819642ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 11m48.527045049s

** /stderr **
I1007 11:44:09.529787  384271 retry.go:31] will retry after 17.050400368s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (66.834135ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 12m5.645565607s

** /stderr **
I1007 11:44:26.647984  384271 retry.go:31] will retry after 22.14355504s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (68.464898ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 12m27.858255185s

** /stderr **
I1007 11:44:48.860988  384271 retry.go:31] will retry after 1m7.690110971s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (86.861147ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 13m35.63535922s

** /stderr **
I1007 11:45:56.638392  384271 retry.go:31] will retry after 1m5.483158317s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (68.819594ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 14m41.188604934s

** /stderr **
I1007 11:47:02.191403  384271 retry.go:31] will retry after 1m0.911264479s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (67.185702ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 15m42.167838878s

** /stderr **
I1007 11:48:03.170398  384271 retry.go:31] will retry after 52.307187323s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-246818 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-246818 top pods -n kube-system: exit status 1 (67.2462ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-9n6rn, age: 16m34.542836633s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
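The block above is a plain retry loop: each `kubectl top pods -n kube-system` call reports "Metrics not available" for coredns, the harness sleeps for a progressively longer interval (a few seconds up to about a minute in this run), and addons_test.go:416 eventually gives up. Below is a rough, self-contained stand-in for that behaviour, assuming only the kubectl command shown in the log; the deadline, backoff schedule, and messages are illustrative, not the test's actual retry.go logic.

// Rough stand-in for the retry behaviour in the log above; the deadline and
// backoff schedule are assumptions, not values taken from minikube's retry.go.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	delay := 3 * time.Second
	for time.Now().Before(deadline) {
		// Same command the test runs against the addons-246818 profile.
		out, err := exec.Command("kubectl", "--context", "addons-246818",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return
		}
		fmt.Printf("metrics not ready (%v); retrying in %s\n", err, delay)
		time.Sleep(delay)
		if delay < time.Minute {
			delay *= 2 // grow the wait, loosely mirroring the increasing intervals above
		}
	}
	fmt.Println("failed checking metric server: metrics never became available")
}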
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-246818 -n addons-246818
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-246818 logs -n 25: (1.368330435s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-257663 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | -p download-only-257663              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-257663              | download-only-257663 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-243020              | download-only-243020 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-257663              | download-only-257663 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| start   | --download-only -p                   | binary-mirror-827339 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | binary-mirror-827339                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38787               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-827339              | binary-mirror-827339 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| addons  | enable dashboard -p                  | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | addons-246818                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | addons-246818                        |                      |         |         |                     |                     |
	| start   | -p addons-246818 --wait=true         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:34 UTC | 07 Oct 24 11:34 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | -p addons-246818                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | -p addons-246818                     |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| ip      | addons-246818 ip                     | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-246818 addons                 | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ssh     | addons-246818 ssh curl -s            | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-246818 ip                     | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	|         | ingress-dns --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:46 UTC |
	|         | ingress --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-246818 addons                 | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:48 UTC |                     |
	|         | storage-provisioner-rancher          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:31:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:31:34.116156  384891 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:31:34.116270  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:31:34.116277  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:31:34.116282  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:31:34.116469  384891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 11:31:34.117144  384891 out.go:352] Setting JSON to false
	I1007 11:31:34.118102  384891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4440,"bootTime":1728296254,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:31:34.118176  384891 start.go:139] virtualization: kvm guest
	I1007 11:31:34.120408  384891 out.go:177] * [addons-246818] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:31:34.122258  384891 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 11:31:34.122285  384891 notify.go:220] Checking for updates...
	I1007 11:31:34.124959  384891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:31:34.126627  384891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:31:34.128213  384891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:34.129872  384891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:31:34.131237  384891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:31:34.132940  384891 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:31:34.166945  384891 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 11:31:34.168406  384891 start.go:297] selected driver: kvm2
	I1007 11:31:34.168430  384891 start.go:901] validating driver "kvm2" against <nil>
	I1007 11:31:34.168446  384891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:31:34.169281  384891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:31:34.169397  384891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:31:34.186640  384891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:31:34.186710  384891 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:31:34.186981  384891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:31:34.187031  384891 cni.go:84] Creating CNI manager for ""
	I1007 11:31:34.187088  384891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:31:34.187116  384891 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 11:31:34.187194  384891 start.go:340] cluster config:
	{Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:31:34.187319  384891 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:31:34.189414  384891 out.go:177] * Starting "addons-246818" primary control-plane node in "addons-246818" cluster
	I1007 11:31:34.191135  384891 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:31:34.191199  384891 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 11:31:34.191215  384891 cache.go:56] Caching tarball of preloaded images
	I1007 11:31:34.191343  384891 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:31:34.191358  384891 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:31:34.191753  384891 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/config.json ...
	I1007 11:31:34.191788  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/config.json: {Name:mk8ac1a8a8e3adadfd093d5da0627d5b3cabf0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:31:34.191973  384891 start.go:360] acquireMachinesLock for addons-246818: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:31:34.192039  384891 start.go:364] duration metric: took 47.555µs to acquireMachinesLock for "addons-246818"
	I1007 11:31:34.192065  384891 start.go:93] Provisioning new machine with config: &{Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:31:34.192185  384891 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 11:31:34.194346  384891 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 11:31:34.194555  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:31:34.194629  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:31:34.210789  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I1007 11:31:34.211351  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:31:34.211942  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:31:34.211966  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:31:34.212395  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:31:34.212604  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:34.212831  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:34.213029  384891 start.go:159] libmachine.API.Create for "addons-246818" (driver="kvm2")
	I1007 11:31:34.213068  384891 client.go:168] LocalClient.Create starting
	I1007 11:31:34.213129  384891 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 11:31:34.455639  384891 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 11:31:34.570226  384891 main.go:141] libmachine: Running pre-create checks...
	I1007 11:31:34.570260  384891 main.go:141] libmachine: (addons-246818) Calling .PreCreateCheck
	I1007 11:31:34.570842  384891 main.go:141] libmachine: (addons-246818) Calling .GetConfigRaw
	I1007 11:31:34.571323  384891 main.go:141] libmachine: Creating machine...
	I1007 11:31:34.571338  384891 main.go:141] libmachine: (addons-246818) Calling .Create
	I1007 11:31:34.571502  384891 main.go:141] libmachine: (addons-246818) Creating KVM machine...
	I1007 11:31:34.572696  384891 main.go:141] libmachine: (addons-246818) DBG | found existing default KVM network
	I1007 11:31:34.573525  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:34.573329  384913 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000115200}
	I1007 11:31:34.573556  384891 main.go:141] libmachine: (addons-246818) DBG | created network xml: 
	I1007 11:31:34.573571  384891 main.go:141] libmachine: (addons-246818) DBG | <network>
	I1007 11:31:34.573580  384891 main.go:141] libmachine: (addons-246818) DBG |   <name>mk-addons-246818</name>
	I1007 11:31:34.573590  384891 main.go:141] libmachine: (addons-246818) DBG |   <dns enable='no'/>
	I1007 11:31:34.573600  384891 main.go:141] libmachine: (addons-246818) DBG |   
	I1007 11:31:34.573610  384891 main.go:141] libmachine: (addons-246818) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 11:31:34.573622  384891 main.go:141] libmachine: (addons-246818) DBG |     <dhcp>
	I1007 11:31:34.573632  384891 main.go:141] libmachine: (addons-246818) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 11:31:34.573640  384891 main.go:141] libmachine: (addons-246818) DBG |     </dhcp>
	I1007 11:31:34.573647  384891 main.go:141] libmachine: (addons-246818) DBG |   </ip>
	I1007 11:31:34.573659  384891 main.go:141] libmachine: (addons-246818) DBG |   
	I1007 11:31:34.573670  384891 main.go:141] libmachine: (addons-246818) DBG | </network>
	I1007 11:31:34.573677  384891 main.go:141] libmachine: (addons-246818) DBG | 
	I1007 11:31:34.579638  384891 main.go:141] libmachine: (addons-246818) DBG | trying to create private KVM network mk-addons-246818 192.168.39.0/24...
	I1007 11:31:34.649044  384891 main.go:141] libmachine: (addons-246818) DBG | private KVM network mk-addons-246818 192.168.39.0/24 created
	I1007 11:31:34.649094  384891 main.go:141] libmachine: (addons-246818) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818 ...
	I1007 11:31:34.649118  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:34.648912  384913 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:34.649140  384891 main.go:141] libmachine: (addons-246818) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 11:31:34.649156  384891 main.go:141] libmachine: (addons-246818) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 11:31:34.924379  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:34.924203  384913 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa...
	I1007 11:31:35.127437  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:35.127261  384913 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/addons-246818.rawdisk...
	I1007 11:31:35.127475  384891 main.go:141] libmachine: (addons-246818) DBG | Writing magic tar header
	I1007 11:31:35.127490  384891 main.go:141] libmachine: (addons-246818) DBG | Writing SSH key tar header
	I1007 11:31:35.127501  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:35.127388  384913 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818 ...
	I1007 11:31:35.127525  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818
	I1007 11:31:35.127537  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 11:31:35.127548  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818 (perms=drwx------)
	I1007 11:31:35.127558  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 11:31:35.127564  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 11:31:35.127603  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:35.127639  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 11:31:35.127648  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 11:31:35.127657  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 11:31:35.127665  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins
	I1007 11:31:35.127678  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home
	I1007 11:31:35.127691  384891 main.go:141] libmachine: (addons-246818) DBG | Skipping /home - not owner
	I1007 11:31:35.127708  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 11:31:35.127726  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 11:31:35.127740  384891 main.go:141] libmachine: (addons-246818) Creating domain...
	I1007 11:31:35.128819  384891 main.go:141] libmachine: (addons-246818) define libvirt domain using xml: 
	I1007 11:31:35.128847  384891 main.go:141] libmachine: (addons-246818) <domain type='kvm'>
	I1007 11:31:35.128859  384891 main.go:141] libmachine: (addons-246818)   <name>addons-246818</name>
	I1007 11:31:35.128867  384891 main.go:141] libmachine: (addons-246818)   <memory unit='MiB'>4000</memory>
	I1007 11:31:35.128910  384891 main.go:141] libmachine: (addons-246818)   <vcpu>2</vcpu>
	I1007 11:31:35.128933  384891 main.go:141] libmachine: (addons-246818)   <features>
	I1007 11:31:35.128941  384891 main.go:141] libmachine: (addons-246818)     <acpi/>
	I1007 11:31:35.128948  384891 main.go:141] libmachine: (addons-246818)     <apic/>
	I1007 11:31:35.128969  384891 main.go:141] libmachine: (addons-246818)     <pae/>
	I1007 11:31:35.128980  384891 main.go:141] libmachine: (addons-246818)     
	I1007 11:31:35.128988  384891 main.go:141] libmachine: (addons-246818)   </features>
	I1007 11:31:35.128998  384891 main.go:141] libmachine: (addons-246818)   <cpu mode='host-passthrough'>
	I1007 11:31:35.129006  384891 main.go:141] libmachine: (addons-246818)   
	I1007 11:31:35.129016  384891 main.go:141] libmachine: (addons-246818)   </cpu>
	I1007 11:31:35.129046  384891 main.go:141] libmachine: (addons-246818)   <os>
	I1007 11:31:35.129077  384891 main.go:141] libmachine: (addons-246818)     <type>hvm</type>
	I1007 11:31:35.129084  384891 main.go:141] libmachine: (addons-246818)     <boot dev='cdrom'/>
	I1007 11:31:35.129095  384891 main.go:141] libmachine: (addons-246818)     <boot dev='hd'/>
	I1007 11:31:35.129107  384891 main.go:141] libmachine: (addons-246818)     <bootmenu enable='no'/>
	I1007 11:31:35.129117  384891 main.go:141] libmachine: (addons-246818)   </os>
	I1007 11:31:35.129125  384891 main.go:141] libmachine: (addons-246818)   <devices>
	I1007 11:31:35.129140  384891 main.go:141] libmachine: (addons-246818)     <disk type='file' device='cdrom'>
	I1007 11:31:35.129155  384891 main.go:141] libmachine: (addons-246818)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/boot2docker.iso'/>
	I1007 11:31:35.129167  384891 main.go:141] libmachine: (addons-246818)       <target dev='hdc' bus='scsi'/>
	I1007 11:31:35.129174  384891 main.go:141] libmachine: (addons-246818)       <readonly/>
	I1007 11:31:35.129180  384891 main.go:141] libmachine: (addons-246818)     </disk>
	I1007 11:31:35.129194  384891 main.go:141] libmachine: (addons-246818)     <disk type='file' device='disk'>
	I1007 11:31:35.129223  384891 main.go:141] libmachine: (addons-246818)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 11:31:35.129239  384891 main.go:141] libmachine: (addons-246818)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/addons-246818.rawdisk'/>
	I1007 11:31:35.129249  384891 main.go:141] libmachine: (addons-246818)       <target dev='hda' bus='virtio'/>
	I1007 11:31:35.129258  384891 main.go:141] libmachine: (addons-246818)     </disk>
	I1007 11:31:35.129263  384891 main.go:141] libmachine: (addons-246818)     <interface type='network'>
	I1007 11:31:35.129278  384891 main.go:141] libmachine: (addons-246818)       <source network='mk-addons-246818'/>
	I1007 11:31:35.129290  384891 main.go:141] libmachine: (addons-246818)       <model type='virtio'/>
	I1007 11:31:35.129301  384891 main.go:141] libmachine: (addons-246818)     </interface>
	I1007 11:31:35.129312  384891 main.go:141] libmachine: (addons-246818)     <interface type='network'>
	I1007 11:31:35.129322  384891 main.go:141] libmachine: (addons-246818)       <source network='default'/>
	I1007 11:31:35.129335  384891 main.go:141] libmachine: (addons-246818)       <model type='virtio'/>
	I1007 11:31:35.129345  384891 main.go:141] libmachine: (addons-246818)     </interface>
	I1007 11:31:35.129351  384891 main.go:141] libmachine: (addons-246818)     <serial type='pty'>
	I1007 11:31:35.129363  384891 main.go:141] libmachine: (addons-246818)       <target port='0'/>
	I1007 11:31:35.129375  384891 main.go:141] libmachine: (addons-246818)     </serial>
	I1007 11:31:35.129385  384891 main.go:141] libmachine: (addons-246818)     <console type='pty'>
	I1007 11:31:35.129392  384891 main.go:141] libmachine: (addons-246818)       <target type='serial' port='0'/>
	I1007 11:31:35.129398  384891 main.go:141] libmachine: (addons-246818)     </console>
	I1007 11:31:35.129404  384891 main.go:141] libmachine: (addons-246818)     <rng model='virtio'>
	I1007 11:31:35.129410  384891 main.go:141] libmachine: (addons-246818)       <backend model='random'>/dev/random</backend>
	I1007 11:31:35.129416  384891 main.go:141] libmachine: (addons-246818)     </rng>
	I1007 11:31:35.129420  384891 main.go:141] libmachine: (addons-246818)     
	I1007 11:31:35.129426  384891 main.go:141] libmachine: (addons-246818)     
	I1007 11:31:35.129431  384891 main.go:141] libmachine: (addons-246818)   </devices>
	I1007 11:31:35.129437  384891 main.go:141] libmachine: (addons-246818) </domain>
	I1007 11:31:35.129452  384891 main.go:141] libmachine: (addons-246818) 
	I1007 11:31:35.136045  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:59:de:27 in network default
	I1007 11:31:35.136621  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:35.136638  384891 main.go:141] libmachine: (addons-246818) Ensuring networks are active...
	I1007 11:31:35.137397  384891 main.go:141] libmachine: (addons-246818) Ensuring network default is active
	I1007 11:31:35.137759  384891 main.go:141] libmachine: (addons-246818) Ensuring network mk-addons-246818 is active
	I1007 11:31:35.139309  384891 main.go:141] libmachine: (addons-246818) Getting domain xml...
	I1007 11:31:35.140007  384891 main.go:141] libmachine: (addons-246818) Creating domain...
	I1007 11:31:36.562781  384891 main.go:141] libmachine: (addons-246818) Waiting to get IP...
	I1007 11:31:36.563649  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:36.564039  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:36.564102  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:36.564034  384913 retry.go:31] will retry after 196.803567ms: waiting for machine to come up
	I1007 11:31:36.762559  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:36.762980  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:36.763006  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:36.762928  384913 retry.go:31] will retry after 309.609813ms: waiting for machine to come up
	I1007 11:31:37.074568  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:37.075066  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:37.075099  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:37.075019  384913 retry.go:31] will retry after 357.050229ms: waiting for machine to come up
	I1007 11:31:37.433468  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:37.433865  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:37.433888  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:37.433824  384913 retry.go:31] will retry after 404.967007ms: waiting for machine to come up
	I1007 11:31:37.840487  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:37.840912  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:37.840944  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:37.840852  384913 retry.go:31] will retry after 505.430509ms: waiting for machine to come up
	I1007 11:31:38.347450  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:38.347839  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:38.347868  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:38.347768  384913 retry.go:31] will retry after 847.255626ms: waiting for machine to come up
	I1007 11:31:39.196471  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:39.196947  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:39.196980  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:39.196886  384913 retry.go:31] will retry after 920.58458ms: waiting for machine to come up
	I1007 11:31:40.119476  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:40.119814  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:40.119836  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:40.119790  384913 retry.go:31] will retry after 948.651988ms: waiting for machine to come up
	I1007 11:31:41.070215  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:41.070708  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:41.070731  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:41.070668  384913 retry.go:31] will retry after 1.382953489s: waiting for machine to come up
	I1007 11:31:42.455452  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:42.455916  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:42.455941  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:42.455847  384913 retry.go:31] will retry after 2.262578278s: waiting for machine to come up
	I1007 11:31:44.719656  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:44.720338  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:44.720368  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:44.720277  384913 retry.go:31] will retry after 2.289996901s: waiting for machine to come up
	I1007 11:31:47.012350  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:47.012859  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:47.012889  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:47.012809  384913 retry.go:31] will retry after 3.343133276s: waiting for machine to come up
	I1007 11:31:50.358204  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:50.358539  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:50.358566  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:50.358487  384913 retry.go:31] will retry after 4.335427182s: waiting for machine to come up
	I1007 11:31:54.695193  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:54.695591  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:54.695617  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:54.695544  384913 retry.go:31] will retry after 3.558303483s: waiting for machine to come up
	I1007 11:31:58.258305  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.258838  384891 main.go:141] libmachine: (addons-246818) Found IP for machine: 192.168.39.141
	I1007 11:31:58.258873  384891 main.go:141] libmachine: (addons-246818) Reserving static IP address...
	I1007 11:31:58.258887  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has current primary IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.259281  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find host DHCP lease matching {name: "addons-246818", mac: "52:54:00:b1:d7:db", ip: "192.168.39.141"} in network mk-addons-246818
	I1007 11:31:58.385299  384891 main.go:141] libmachine: (addons-246818) Reserved static IP address: 192.168.39.141
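(The retries above show the KVM driver polling libvirt for a DHCP lease on MAC 52:54:00:b1:d7:db, waiting a little longer after each miss until 192.168.39.141 appears. A minimal sketch of that wait loop, assuming a hypothetical lookupLeaseIP helper standing in for the libvirt lease query; the actual driver logic lives in minikube's kvm2 machine driver.)

	package kvmwait

	import (
		"fmt"
		"log"
		"time"
	)

	// waitForIP polls for the domain's DHCP lease, backing off between attempts.
	// lookupLeaseIP is a stand-in for the libvirt lease lookup seen in the log.
	func waitForIP(lookupLeaseIP func() (string, error)) (string, error) {
		backoff := 200 * time.Millisecond
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(); err == nil && ip != "" {
				return ip, nil
			}
			log.Printf("will retry after %v: waiting for machine to come up", backoff)
			time.Sleep(backoff)
			backoff = backoff * 3 / 2 // grow the wait between polls, roughly as in the log
		}
		return "", fmt.Errorf("timed out waiting for a DHCP lease")
	}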
	I1007 11:31:58.385331  384891 main.go:141] libmachine: (addons-246818) DBG | Getting to WaitForSSH function...
	I1007 11:31:58.385340  384891 main.go:141] libmachine: (addons-246818) Waiting for SSH to be available...
	I1007 11:31:58.387663  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.388108  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.388140  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.388409  384891 main.go:141] libmachine: (addons-246818) DBG | Using SSH client type: external
	I1007 11:31:58.388428  384891 main.go:141] libmachine: (addons-246818) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa (-rw-------)
	I1007 11:31:58.388460  384891 main.go:141] libmachine: (addons-246818) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 11:31:58.388472  384891 main.go:141] libmachine: (addons-246818) DBG | About to run SSH command:
	I1007 11:31:58.388485  384891 main.go:141] libmachine: (addons-246818) DBG | exit 0
	I1007 11:31:58.523637  384891 main.go:141] libmachine: (addons-246818) DBG | SSH cmd err, output: <nil>: 
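(The first reachability check is simply running `exit 0` over SSH with the machine's generated id_rsa and a throwaway host-key policy; an empty error and empty output, as above, means sshd is up. A minimal sketch of that probe with os/exec, using the core of the option set from the log; the address and key path are the ones printed above.)

	package sshprobe

	import "os/exec"

	// probeSSH mirrors the external-ssh liveness check in the log: it runs `exit 0`
	// on the guest and succeeds only once sshd accepts the generated key.
	func probeSSH(addr, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + addr,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

(For this run that would be probeSSH("192.168.39.141", "/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa").)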
	I1007 11:31:58.523957  384891 main.go:141] libmachine: (addons-246818) KVM machine creation complete!
	I1007 11:31:58.524322  384891 main.go:141] libmachine: (addons-246818) Calling .GetConfigRaw
	I1007 11:31:58.524995  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:58.525265  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:58.525453  384891 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 11:31:58.525471  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:31:58.526983  384891 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 11:31:58.527001  384891 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 11:31:58.527007  384891 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 11:31:58.527013  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.529966  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.530364  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.530392  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.530622  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.530830  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.531010  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.531238  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.531430  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.531658  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.531672  384891 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 11:31:58.638640  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:31:58.638671  384891 main.go:141] libmachine: Detecting the provisioner...
	I1007 11:31:58.638699  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.641499  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.641868  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.641902  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.642074  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.642323  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.642499  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.642641  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.642833  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.643029  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.643040  384891 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 11:31:58.752146  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 11:31:58.752213  384891 main.go:141] libmachine: found compatible host: buildroot
	I1007 11:31:58.752223  384891 main.go:141] libmachine: Provisioning with buildroot...
	I1007 11:31:58.752233  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:58.752488  384891 buildroot.go:166] provisioning hostname "addons-246818"
	I1007 11:31:58.752521  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:58.752755  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.755321  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.755658  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.755689  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.755781  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.755930  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.756116  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.756273  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.756441  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.756677  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.756693  384891 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-246818 && echo "addons-246818" | sudo tee /etc/hostname
	I1007 11:31:58.878487  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-246818
	
	I1007 11:31:58.878522  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.881235  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.881595  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.881628  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.881829  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.882043  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.882221  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.882373  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.882547  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.882736  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.882751  384891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-246818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-246818/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-246818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:31:59.000758  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:31:59.000793  384891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 11:31:59.000860  384891 buildroot.go:174] setting up certificates
	I1007 11:31:59.000882  384891 provision.go:84] configureAuth start
	I1007 11:31:59.000901  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:59.001290  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:31:59.004173  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.004729  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.004770  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.005018  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.007634  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.007984  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.008012  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.008236  384891 provision.go:143] copyHostCerts
	I1007 11:31:59.008313  384891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 11:31:59.008444  384891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 11:31:59.008531  384891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 11:31:59.008592  384891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.addons-246818 san=[127.0.0.1 192.168.39.141 addons-246818 localhost minikube]
	I1007 11:31:59.251829  384891 provision.go:177] copyRemoteCerts
	I1007 11:31:59.251901  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:31:59.251926  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.255073  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.255515  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.255554  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.255695  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.255927  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.256090  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.256229  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.342524  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 11:31:59.367975  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:31:59.393410  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 11:31:59.418593  384891 provision.go:87] duration metric: took 417.693053ms to configureAuth
	I1007 11:31:59.418624  384891 buildroot.go:189] setting minikube options for container-runtime
	I1007 11:31:59.418838  384891 config.go:182] Loaded profile config "addons-246818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:31:59.418935  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.421597  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.421932  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.421960  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.422111  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.422335  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.422530  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.422645  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.422799  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:59.423008  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:59.423028  384891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:31:59.655212  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:31:59.655259  384891 main.go:141] libmachine: Checking connection to Docker...
	I1007 11:31:59.655271  384891 main.go:141] libmachine: (addons-246818) Calling .GetURL
	I1007 11:31:59.656909  384891 main.go:141] libmachine: (addons-246818) DBG | Using libvirt version 6000000
	I1007 11:31:59.659411  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.659775  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.659810  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.659963  384891 main.go:141] libmachine: Docker is up and running!
	I1007 11:31:59.659972  384891 main.go:141] libmachine: Reticulating splines...
	I1007 11:31:59.659979  384891 client.go:171] duration metric: took 25.446899659s to LocalClient.Create
	I1007 11:31:59.660003  384891 start.go:167] duration metric: took 25.446975437s to libmachine.API.Create "addons-246818"
	I1007 11:31:59.660014  384891 start.go:293] postStartSetup for "addons-246818" (driver="kvm2")
	I1007 11:31:59.660024  384891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:31:59.660041  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.660313  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:31:59.660341  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.662645  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.663064  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.663113  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.663225  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.663412  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.663549  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.663695  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.746681  384891 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:31:59.750995  384891 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 11:31:59.751029  384891 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 11:31:59.751132  384891 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 11:31:59.751171  384891 start.go:296] duration metric: took 91.150102ms for postStartSetup
	I1007 11:31:59.751218  384891 main.go:141] libmachine: (addons-246818) Calling .GetConfigRaw
	I1007 11:31:59.751830  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:31:59.754353  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.754726  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.754752  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.754998  384891 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/config.json ...
	I1007 11:31:59.755218  384891 start.go:128] duration metric: took 25.563019291s to createHost
	I1007 11:31:59.755244  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.757372  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.757682  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.757708  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.757833  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.757994  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.758133  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.758316  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.758481  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:59.758651  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:59.758660  384891 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 11:31:59.868422  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728300719.835078686
	
	I1007 11:31:59.868449  384891 fix.go:216] guest clock: 1728300719.835078686
	I1007 11:31:59.868459  384891 fix.go:229] Guest: 2024-10-07 11:31:59.835078686 +0000 UTC Remote: 2024-10-07 11:31:59.755232069 +0000 UTC m=+25.679693573 (delta=79.846617ms)
	I1007 11:31:59.868533  384891 fix.go:200] guest clock delta is within tolerance: 79.846617ms
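(The delta is just the guest timestamp minus the host timestamp at the moment of the check: 1728300719.835078686 − 1728300719.755232069 ≈ 0.079846617 s, i.e. the 79.846617ms reported, which is why the clock is judged "within tolerance" and no time adjustment is pushed to the VM. The tolerance value itself is not printed in this log.)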
	I1007 11:31:59.868543  384891 start.go:83] releasing machines lock for "addons-246818", held for 25.676492095s
	I1007 11:31:59.868570  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.868898  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:31:59.871581  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.871955  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.871981  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.872222  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.872811  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.872983  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.873091  384891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:31:59.873149  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.873159  384891 ssh_runner.go:195] Run: cat /version.json
	I1007 11:31:59.873181  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.875672  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.875703  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.876005  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.876042  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.876063  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.876076  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.876200  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.876338  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.876412  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.876507  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.876572  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.876743  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.876780  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.876890  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.978691  384891 ssh_runner.go:195] Run: systemctl --version
	I1007 11:31:59.985018  384891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:32:00.152322  384891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 11:32:00.158492  384891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 11:32:00.158593  384891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:32:00.176990  384891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 11:32:00.177022  384891 start.go:495] detecting cgroup driver to use...
	I1007 11:32:00.177109  384891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:32:00.195687  384891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:32:00.211978  384891 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:32:00.212058  384891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:32:00.227604  384891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:32:00.242144  384891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:32:00.366315  384891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:32:00.526683  384891 docker.go:233] disabling docker service ...
	I1007 11:32:00.526776  384891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:32:00.541214  384891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:32:00.554981  384891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:32:00.685283  384891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:32:00.806166  384891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:32:00.821760  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:32:00.840995  384891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:32:00.841077  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.852364  384891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:32:00.852452  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.863984  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.875862  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.887376  384891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:32:00.899170  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.910698  384891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.928710  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
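(Taken together, the sed edits above leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly these settings; this is an illustrative reconstruction from the commands, not a capture of the file itself. The conmon_cgroup line is inserted directly after cgroup_manager, and the unprivileged-port sysctl is prepended into default_sysctls.)

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]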
	I1007 11:32:00.939899  384891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:32:00.950399  384891 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 11:32:00.950497  384891 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 11:32:00.964507  384891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 11:32:00.975096  384891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:32:01.103400  384891 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:32:01.206446  384891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:32:01.206551  384891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:32:01.212082  384891 start.go:563] Will wait 60s for crictl version
	I1007 11:32:01.212179  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:32:01.216568  384891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:32:01.255513  384891 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 11:32:01.255616  384891 ssh_runner.go:195] Run: crio --version
	I1007 11:32:01.285883  384891 ssh_runner.go:195] Run: crio --version
	I1007 11:32:01.318274  384891 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 11:32:01.319603  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:32:01.322312  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:01.322607  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:01.322642  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:01.322882  384891 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 11:32:01.328032  384891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:32:01.342592  384891 kubeadm.go:883] updating cluster {Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:32:01.342753  384891 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:32:01.342813  384891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:32:01.385519  384891 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 11:32:01.385605  384891 ssh_runner.go:195] Run: which lz4
	I1007 11:32:01.389912  384891 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 11:32:01.394513  384891 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 11:32:01.394572  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 11:32:02.800302  384891 crio.go:462] duration metric: took 1.410419336s to copy over tarball
	I1007 11:32:02.800451  384891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 11:32:04.995474  384891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.194982184s)
	I1007 11:32:04.995507  384891 crio.go:469] duration metric: took 2.195153422s to extract the tarball
	I1007 11:32:04.995518  384891 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 11:32:05.034133  384891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:32:05.081714  384891 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:32:05.081748  384891 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:32:05.081759  384891 kubeadm.go:934] updating node { 192.168.39.141 8443 v1.31.1 crio true true} ...
	I1007 11:32:05.081919  384891 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-246818 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 11:32:05.082006  384891 ssh_runner.go:195] Run: crio config
	I1007 11:32:05.126986  384891 cni.go:84] Creating CNI manager for ""
	I1007 11:32:05.127017  384891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:32:05.127029  384891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:32:05.127055  384891 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-246818 NodeName:addons-246818 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:32:05.127205  384891 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-246818"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 11:32:05.127271  384891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:32:05.138343  384891 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:32:05.138419  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:32:05.148540  384891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 11:32:05.166067  384891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:32:05.184173  384891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1007 11:32:05.202127  384891 ssh_runner.go:195] Run: grep 192.168.39.141	control-plane.minikube.internal$ /etc/hosts
	I1007 11:32:05.206447  384891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
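(Between this step and the earlier host.minikube.internal edit, the guest's /etc/hosts ends up carrying two minikube-specific names, reconstructed here from the two commands rather than captured from the VM:)

	192.168.39.1	host.minikube.internal
	192.168.39.141	control-plane.minikube.internal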
	I1007 11:32:05.219733  384891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:32:05.356364  384891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:32:05.374398  384891 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818 for IP: 192.168.39.141
	I1007 11:32:05.374431  384891 certs.go:194] generating shared ca certs ...
	I1007 11:32:05.374455  384891 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.374717  384891 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 11:32:05.569743  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt ...
	I1007 11:32:05.569780  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt: {Name:mka635174f873364a1d996695969f11525f0aad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.570000  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key ...
	I1007 11:32:05.570016  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key: {Name:mkb9f08978b906a4a69bf40b3648846639990aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.570120  384891 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 11:32:05.641034  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt ...
	I1007 11:32:05.641069  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt: {Name:mk6c2e0cb0b3463b53d4a7b8eca27330e83cad52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.641265  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key ...
	I1007 11:32:05.641279  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key: {Name:mkbd00d408f92ed97628a06bd31d4a22a55f1116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.641384  384891 certs.go:256] generating profile certs ...
	I1007 11:32:05.641459  384891 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.key
	I1007 11:32:05.641475  384891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt with IP's: []
	I1007 11:32:05.718596  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt ...
	I1007 11:32:05.718631  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: {Name:mk54791d72c1dd37de668acfdf6ae9b6a18b6816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.718824  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.key ...
	I1007 11:32:05.718838  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.key: {Name:mkc39919855b7ef97968b46dce56ec908abc99e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.718952  384891 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102
	I1007 11:32:05.719011  384891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141]
	I1007 11:32:05.819688  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102 ...
	I1007 11:32:05.819722  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102: {Name:mkfaee04775ee1012712d288fadcabaf991b49f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.819920  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102 ...
	I1007 11:32:05.819938  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102: {Name:mkeee88413f174c6e33cb018157316e66b4b0927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.820040  384891 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt
	I1007 11:32:05.820118  384891 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key
	I1007 11:32:05.820163  384891 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key
	I1007 11:32:05.820181  384891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt with IP's: []
	I1007 11:32:05.968555  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt ...
	I1007 11:32:05.968602  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt: {Name:mk5df33635e69d6716681ea740275cc204f34bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.968800  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key ...
	I1007 11:32:05.968815  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key: {Name:mkf7d084582e160837c9ab4efc5b7bae6d92e36f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.969012  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 11:32:05.969068  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 11:32:05.969100  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:32:05.969125  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 11:32:05.969737  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:32:05.995982  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 11:32:06.021458  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:32:06.050024  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 11:32:06.079964  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 11:32:06.108572  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:32:06.135463  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:32:06.162035  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 11:32:06.186675  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:32:06.216268  384891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:32:06.234408  384891 ssh_runner.go:195] Run: openssl version
	I1007 11:32:06.240683  384891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:32:06.252555  384891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:32:06.257813  384891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:32:06.257897  384891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:32:06.264471  384891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 11:32:06.276095  384891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:32:06.280492  384891 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 11:32:06.280573  384891 kubeadm.go:392] StartCluster: {Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:32:06.280683  384891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:32:06.280788  384891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:32:06.325293  384891 cri.go:89] found id: ""
	I1007 11:32:06.325397  384891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 11:32:06.338096  384891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 11:32:06.348756  384891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 11:32:06.359237  384891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 11:32:06.359265  384891 kubeadm.go:157] found existing configuration files:
	
	I1007 11:32:06.359321  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 11:32:06.369410  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 11:32:06.369502  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 11:32:06.380168  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 11:32:06.390519  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 11:32:06.390589  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 11:32:06.401125  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 11:32:06.411429  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 11:32:06.411496  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 11:32:06.422449  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 11:32:06.432934  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 11:32:06.433018  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 11:32:06.444113  384891 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 11:32:06.499524  384891 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 11:32:06.499599  384891 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 11:32:06.604372  384891 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 11:32:06.604511  384891 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 11:32:06.604590  384891 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 11:32:06.621867  384891 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 11:32:06.753861  384891 out.go:235]   - Generating certificates and keys ...
	I1007 11:32:06.753997  384891 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 11:32:06.754108  384891 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 11:32:06.754241  384891 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 11:32:06.907525  384891 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 11:32:07.081367  384891 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 11:32:07.235517  384891 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 11:32:07.323576  384891 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 11:32:07.323734  384891 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-246818 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1007 11:32:07.484355  384891 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 11:32:07.484552  384891 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-246818 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1007 11:32:07.690609  384891 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 11:32:07.921485  384891 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 11:32:08.090512  384891 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 11:32:08.090799  384891 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 11:32:08.402148  384891 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 11:32:08.478195  384891 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 11:32:08.612503  384891 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 11:32:08.702731  384891 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 11:32:09.158663  384891 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 11:32:09.159440  384891 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 11:32:09.161819  384891 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 11:32:09.167042  384891 out.go:235]   - Booting up control plane ...
	I1007 11:32:09.167167  384891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 11:32:09.167249  384891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 11:32:09.167364  384891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 11:32:09.179881  384891 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 11:32:09.189965  384891 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 11:32:09.190035  384891 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 11:32:09.324400  384891 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 11:32:09.324529  384891 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 11:32:09.831332  384891 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.899298ms
	I1007 11:32:09.831474  384891 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 11:32:15.831159  384891 kubeadm.go:310] [api-check] The API server is healthy after 6.001731023s
	I1007 11:32:15.856870  384891 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 11:32:15.879662  384891 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 11:32:15.920548  384891 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 11:32:15.920789  384891 kubeadm.go:310] [mark-control-plane] Marking the node addons-246818 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 11:32:15.939440  384891 kubeadm.go:310] [bootstrap-token] Using token: bpaf5t.csjf2xhv6gacp46a
	I1007 11:32:15.940908  384891 out.go:235]   - Configuring RBAC rules ...
	I1007 11:32:15.941047  384891 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 11:32:15.948031  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 11:32:15.960728  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 11:32:15.964750  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 11:32:15.968808  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 11:32:15.973958  384891 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 11:32:16.238653  384891 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 11:32:16.679433  384891 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 11:32:17.237909  384891 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 11:32:17.237938  384891 kubeadm.go:310] 
	I1007 11:32:17.238007  384891 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 11:32:17.238014  384891 kubeadm.go:310] 
	I1007 11:32:17.238117  384891 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 11:32:17.238128  384891 kubeadm.go:310] 
	I1007 11:32:17.238155  384891 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 11:32:17.238231  384891 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 11:32:17.238300  384891 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 11:32:17.238310  384891 kubeadm.go:310] 
	I1007 11:32:17.238377  384891 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 11:32:17.238388  384891 kubeadm.go:310] 
	I1007 11:32:17.238446  384891 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 11:32:17.238488  384891 kubeadm.go:310] 
	I1007 11:32:17.238579  384891 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 11:32:17.238753  384891 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 11:32:17.238851  384891 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 11:32:17.238863  384891 kubeadm.go:310] 
	I1007 11:32:17.238995  384891 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 11:32:17.239104  384891 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 11:32:17.239114  384891 kubeadm.go:310] 
	I1007 11:32:17.239246  384891 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bpaf5t.csjf2xhv6gacp46a \
	I1007 11:32:17.239371  384891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 \
	I1007 11:32:17.239410  384891 kubeadm.go:310] 	--control-plane 
	I1007 11:32:17.239423  384891 kubeadm.go:310] 
	I1007 11:32:17.239519  384891 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 11:32:17.239531  384891 kubeadm.go:310] 
	I1007 11:32:17.239632  384891 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bpaf5t.csjf2xhv6gacp46a \
	I1007 11:32:17.239752  384891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 
	I1007 11:32:17.240386  384891 kubeadm.go:310] W1007 11:32:06.469101     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:32:17.240693  384891 kubeadm.go:310] W1007 11:32:06.469905     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:32:17.240786  384891 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 11:32:17.240815  384891 cni.go:84] Creating CNI manager for ""
	I1007 11:32:17.240824  384891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:32:17.242992  384891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 11:32:17.244570  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 11:32:17.255322  384891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 11:32:17.274225  384891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 11:32:17.274381  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:17.274395  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-246818 minikube.k8s.io/updated_at=2024_10_07T11_32_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=addons-246818 minikube.k8s.io/primary=true
	I1007 11:32:17.305991  384891 ops.go:34] apiserver oom_adj: -16
	I1007 11:32:17.433612  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:17.933706  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:18.434006  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:18.934513  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:19.434172  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:19.933925  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:20.434498  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:20.934340  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:21.035626  384891 kubeadm.go:1113] duration metric: took 3.76133711s to wait for elevateKubeSystemPrivileges
	I1007 11:32:21.035692  384891 kubeadm.go:394] duration metric: took 14.755128051s to StartCluster
	I1007 11:32:21.035722  384891 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:21.035877  384891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:32:21.036315  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:21.036557  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 11:32:21.036565  384891 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:32:21.036649  384891 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 11:32:21.036807  384891 addons.go:69] Setting storage-provisioner=true in profile "addons-246818"
	I1007 11:32:21.036827  384891 addons.go:69] Setting gcp-auth=true in profile "addons-246818"
	I1007 11:32:21.036828  384891 addons.go:69] Setting volcano=true in profile "addons-246818"
	I1007 11:32:21.036807  384891 addons.go:69] Setting inspektor-gadget=true in profile "addons-246818"
	I1007 11:32:21.036852  384891 addons.go:234] Setting addon inspektor-gadget=true in "addons-246818"
	I1007 11:32:21.036853  384891 addons.go:234] Setting addon volcano=true in "addons-246818"
	I1007 11:32:21.036849  384891 addons.go:69] Setting default-storageclass=true in profile "addons-246818"
	I1007 11:32:21.036869  384891 addons.go:69] Setting ingress-dns=true in profile "addons-246818"
	I1007 11:32:21.036879  384891 addons.go:234] Setting addon ingress-dns=true in "addons-246818"
	I1007 11:32:21.036892  384891 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-246818"
	I1007 11:32:21.036910  384891 addons.go:69] Setting metrics-server=true in profile "addons-246818"
	I1007 11:32:21.036924  384891 addons.go:69] Setting registry=true in profile "addons-246818"
	I1007 11:32:21.036927  384891 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-246818"
	I1007 11:32:21.036936  384891 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-246818"
	I1007 11:32:21.036940  384891 addons.go:69] Setting cloud-spanner=true in profile "addons-246818"
	I1007 11:32:21.036952  384891 addons.go:234] Setting addon cloud-spanner=true in "addons-246818"
	I1007 11:32:21.036961  384891 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-246818"
	I1007 11:32:21.036975  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036978  384891 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-246818"
	I1007 11:32:21.036993  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036861  384891 addons.go:69] Setting ingress=true in profile "addons-246818"
	I1007 11:32:21.037030  384891 addons.go:234] Setting addon ingress=true in "addons-246818"
	I1007 11:32:21.037061  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036928  384891 addons.go:234] Setting addon metrics-server=true in "addons-246818"
	I1007 11:32:21.037120  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.037350  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037366  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.036838  384891 addons.go:234] Setting addon storage-provisioner=true in "addons-246818"
	I1007 11:32:21.037391  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037400  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036999  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.037497  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037522  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037549  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037552  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037582  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037557  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037628  384891 addons.go:69] Setting yakd=true in profile "addons-246818"
	I1007 11:32:21.037646  384891 addons.go:234] Setting addon yakd=true in "addons-246818"
	I1007 11:32:21.037680  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036940  384891 addons.go:234] Setting addon registry=true in "addons-246818"
	I1007 11:32:21.037693  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037718  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.037722  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037828  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.036910  384891 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-246818"
	I1007 11:32:21.037863  384891 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-246818"
	I1007 11:32:21.037867  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037869  384891 config.go:182] Loaded profile config "addons-246818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:32:21.036900  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.038071  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.038102  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.036900  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036915  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.038396  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.038456  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.036853  384891 mustload.go:65] Loading cluster: addons-246818
	I1007 11:32:21.037607  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.036926  384891 addons.go:69] Setting volumesnapshots=true in profile "addons-246818"
	I1007 11:32:21.038612  384891 addons.go:234] Setting addon volumesnapshots=true in "addons-246818"
	I1007 11:32:21.038845  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.038991  384891 config.go:182] Loaded profile config "addons-246818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:32:21.039002  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.039392  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.039450  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.038918  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.039508  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.038917  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.039622  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.038947  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.038892  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.040135  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.043624  384891 out.go:177] * Verifying Kubernetes components...
	I1007 11:32:21.045277  384891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:32:21.059674  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40459
	I1007 11:32:21.059886  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34353
	I1007 11:32:21.060116  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I1007 11:32:21.060236  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34293
	I1007 11:32:21.060237  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.060363  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.060626  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.060914  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.060941  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.061120  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.061149  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.061246  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.061270  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.061308  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.061479  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.061589  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.061687  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.061936  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I1007 11:32:21.062180  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.062193  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.062201  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.062216  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.062230  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.062656  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.062682  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.062857  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.063038  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.079607  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.079643  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.079880  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.079926  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.080116  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.080148  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.080156  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I1007 11:32:21.080301  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I1007 11:32:21.080981  384891 addons.go:234] Setting addon default-storageclass=true in "addons-246818"
	I1007 11:32:21.081031  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.081396  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.081445  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.081570  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.081657  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.081692  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.082569  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.082591  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.082721  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.082731  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.082825  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.082859  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.083559  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.083625  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.084318  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.084370  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.095528  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I1007 11:32:21.097818  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I1007 11:32:21.098201  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.098902  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.098927  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.099603  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.100289  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.100343  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.100410  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I1007 11:32:21.100514  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42121
	I1007 11:32:21.100846  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.101205  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.101253  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.101833  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.101860  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.101981  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.102007  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.102113  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.102128  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.102370  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.102568  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.102933  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.102979  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.103022  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.103397  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.103433  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.103660  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.103694  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.113877  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I1007 11:32:21.114643  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.115420  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.115457  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.115864  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.116171  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.120249  384891 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-246818"
	I1007 11:32:21.120318  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.120889  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.120968  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.122908  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42529
	I1007 11:32:21.123632  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.123722  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.123949  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.124128  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
	I1007 11:32:21.124615  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.125161  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.125181  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.125325  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.125337  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.125531  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36931
	I1007 11:32:21.125965  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.126199  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.126337  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.126554  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.127633  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.128389  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.128408  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.128475  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.129155  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.129312  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I1007 11:32:21.129767  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42759
	I1007 11:32:21.130331  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.130464  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.131079  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.131105  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.131107  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.131163  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.131263  384891 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 11:32:21.131344  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 11:32:21.131653  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.131733  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I1007 11:32:21.131896  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.132323  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.132906  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 11:32:21.132924  384891 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 11:32:21.132947  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.133027  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.133041  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.133528  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.133751  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.134899  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:32:21.135060  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.136912  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.137373  384891 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 11:32:21.138188  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.138641  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.138667  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.139051  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.139278  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.139296  384891 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:32:21.139317  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 11:32:21.139349  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.139409  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.139420  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:32:21.139532  384891 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 11:32:21.140022  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.140246  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.141237  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 11:32:21.141257  384891 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 11:32:21.141282  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.141668  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.141695  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.141761  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1007 11:32:21.142266  384891 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:32:21.142440  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 11:32:21.142466  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.144235  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1007 11:32:21.145460  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.145517  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.145588  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.146385  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.146417  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.146860  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.146879  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.147046  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.147059  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.147114  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.147158  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.147367  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.147399  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.147622  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.147702  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.147719  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.147904  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.147959  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.148109  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.148421  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.148482  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1007 11:32:21.148649  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.148707  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I1007 11:32:21.148836  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.149316  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.149355  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.149633  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.149739  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.149828  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.150158  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.150216  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.150473  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.150757  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.150905  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.150919  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.151003  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:21.151012  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:21.154104  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.154210  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.154235  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.154317  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.154383  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.154396  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.154417  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:21.154428  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.154441  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:21.154447  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:21.154455  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:21.154462  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:21.154491  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.154529  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.154555  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43077
	I1007 11:32:21.154584  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.154625  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.154653  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1007 11:32:21.154704  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:21.154725  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:21.154732  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:21.154758  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	W1007 11:32:21.154823  384891 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 11:32:21.155361  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.155377  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.155408  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.155410  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.156096  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.156098  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.156159  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.156308  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.156328  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.156406  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44697
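The "Found binary path ... / Launching plugin server for driver kvm2 / Plugin server listening at address 127.0.0.1:<port>" lines above show libmachine's driver-plugin model: the kvm2 driver binary is spawned as a separate process that announces a localhost RPC endpoint, and each ".GetVersion", ".SetConfigRaw", ".GetMachineName" or ".Close" line is a call made over that connection before the server is torn down. A rough Go sketch of that call pattern follows; the service and method names ("Driver.GetVersion" etc.) are placeholders for illustration, not the real libmachine RPC names.

// queryDriver dials the localhost endpoint a driver plugin announced and
// issues a couple of calls, mirroring the .GetVersion/.GetMachineName lines
// in the log above. Method names are assumptions, not libmachine's own.
package main

import (
	"fmt"
	"net/rpc"
)

func queryDriver(addr string) error {
	client, err := rpc.Dial("tcp", addr) // e.g. "127.0.0.1:44697" from the log
	if err != nil {
		return err
	}
	defer client.Close()

	var version int
	if err := client.Call("Driver.GetVersion", struct{}{}, &version); err != nil {
		return err
	}
	var name string
	if err := client.Call("Driver.GetMachineName", struct{}{}, &name); err != nil {
		return err
	}
	fmt.Printf("driver API version %d, machine %q\n", version, name)
	return nil
}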
	I1007 11:32:21.156880  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.156968  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.157016  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.157057  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.157424  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.157456  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.158097  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.158115  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.158531  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.158741  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.159645  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.161490  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.162042  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.162115  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.163859  384891 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 11:32:21.163880  384891 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 11:32:21.163859  384891 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 11:32:21.165361  384891 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 11:32:21.165385  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 11:32:21.165391  384891 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:32:21.165409  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 11:32:21.165411  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.165429  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.166616  384891 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 11:32:21.167980  384891 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 11:32:21.167999  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 11:32:21.168025  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.170468  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.171175  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.171703  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.171726  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.171772  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.172008  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.172069  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.172087  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.172117  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.172343  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.172387  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.172430  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.172550  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.172611  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.172790  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.172809  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.173186  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.173368  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.173431  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.173849  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.174000  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.178470  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I1007 11:32:21.178919  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.179445  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I1007 11:32:21.179523  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.179546  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.179982  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.180089  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.180539  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.180594  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.180597  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1007 11:32:21.180610  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.180961  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.181131  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.181387  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.181501  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35233
	I1007 11:32:21.181867  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.181944  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.181962  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.182396  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.182521  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.182535  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.182653  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.182767  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.183119  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.183140  384891 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 11:32:21.183154  384891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 11:32:21.183180  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.183341  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.185163  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.186316  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.187476  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.188077  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.188103  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.188214  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 11:32:21.188299  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.188343  384891 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 11:32:21.188505  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.188541  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I1007 11:32:21.188671  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.188708  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1007 11:32:21.188930  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.188981  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.189347  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.189515  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.189531  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.189865  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 11:32:21.189883  384891 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 11:32:21.189902  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.189865  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.190077  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.190097  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.190187  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.190696  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.190711  384891 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 11:32:21.190734  384891 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 11:32:21.190756  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.191383  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.194537  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.194635  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.195445  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.195483  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.195505  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.195967  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I1007 11:32:21.196198  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.196207  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.196231  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.196419  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.196513  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.196561  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.196559  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.196717  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.196754  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.196824  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.196885  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.197100  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.197145  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.197116  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.197531  384891 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 11:32:21.197717  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.198163  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.198321  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 11:32:21.199810  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.199881  384891 out.go:177]   - Using image docker.io/busybox:stable
	I1007 11:32:21.199889  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 11:32:21.201263  384891 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 11:32:21.202581  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 11:32:21.202672  384891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:32:21.202687  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 11:32:21.202707  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.203143  384891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:32:21.203162  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 11:32:21.203188  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.205432  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 11:32:21.206350  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.206434  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.206694  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.206752  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.206778  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.206783  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.207047  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.207116  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.207206  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.207253  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.207304  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.207347  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.207390  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.207667  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.208112  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 11:32:21.209535  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W1007 11:32:21.210345  384891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50694->192.168.39.141:22: read: connection reset by peer
	I1007 11:32:21.210375  384891 retry.go:31] will retry after 169.209619ms: ssh: handshake failed: read tcp 192.168.39.1:50694->192.168.39.141:22: read: connection reset by peer
	I1007 11:32:21.212576  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 11:32:21.213890  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 11:32:21.214984  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 11:32:21.215006  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 11:32:21.215033  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.218251  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.218699  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.218755  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.218955  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.219220  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.219366  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.219512  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	W1007 11:32:21.380838  384891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50722->192.168.39.141:22: read: connection reset by peer
	I1007 11:32:21.380877  384891 retry.go:31] will retry after 486.807101ms: ssh: handshake failed: read tcp 192.168.39.1:50722->192.168.39.141:22: read: connection reset by peer
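The two "ssh: handshake failed ... connection reset by peer" warnings above are transient: several SSH clients are opened at once while the guest's sshd is still settling, and retry.go simply re-dials after a short, growing delay. A minimal sketch of that retry-on-transient-dial-failure pattern is below; the signature, attempt count and delays are illustrative assumptions, not minikube's actual retry.go.

// retrySSHDial re-attempts an SSH dial with a growing delay, mirroring the
// "will retry after ..." behaviour seen in the log above. Illustrative only.
package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

func retrySSHDial(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	delay := 200 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off a little more on each attempt
	}
	return nil, fmt.Errorf("ssh dial failed after %d attempts: %w", attempts, lastErr)
}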
	I1007 11:32:21.569888  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:32:21.662408  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:32:21.671323  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 11:32:21.671359  384891 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 11:32:21.677079  384891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 11:32:21.677113  384891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 11:32:21.717464  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 11:32:21.717508  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 11:32:21.721131  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 11:32:21.721162  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 11:32:21.726314  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:32:21.738766  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 11:32:21.751504  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:32:21.781874  384891 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 11:32:21.781907  384891 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 11:32:21.814479  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 11:32:21.824071  384891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:32:21.824369  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 11:32:21.836461  384891 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 11:32:21.836512  384891 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 11:32:21.850533  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 11:32:21.850563  384891 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 11:32:21.901980  384891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 11:32:21.902023  384891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 11:32:21.930371  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 11:32:21.930410  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 11:32:21.939212  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 11:32:21.939255  384891 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 11:32:21.953019  384891 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:32:21.953053  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
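Each "scp memory --> <path> (<n> bytes)" line above writes an addon manifest that is embedded in the minikube binary out to the guest over the SSH connections just opened; nothing is read from a local file. A rough sketch of such an in-memory copy using golang.org/x/crypto/ssh follows; the tee-based transfer is an assumption for illustration, not minikube's exact ssh_runner implementation.

// copyBytesOverSSH streams an in-memory asset to a path on the guest by
// piping it into "sudo tee" over an SSH session. Illustrative only.
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func copyBytesOverSSH(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()

	sess.Stdin = bytes.NewReader(data)
	// Write the bytes to the destination path; discard tee's stdout echo.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}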
	I1007 11:32:22.048099  384891 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 11:32:22.048134  384891 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 11:32:22.121023  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 11:32:22.121067  384891 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 11:32:22.190982  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:32:22.200335  384891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 11:32:22.200368  384891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 11:32:22.226689  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 11:32:22.226728  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 11:32:22.254471  384891 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 11:32:22.254515  384891 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 11:32:22.284154  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:32:22.284192  384891 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 11:32:22.355775  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:32:22.355802  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 11:32:22.460686  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 11:32:22.460719  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 11:32:22.471081  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 11:32:22.471115  384891 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 11:32:22.474890  384891 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 11:32:22.474914  384891 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 11:32:22.505581  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:32:22.509236  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:32:22.540551  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:32:22.706336  384891 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:32:22.706365  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 11:32:22.757067  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 11:32:22.757099  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 11:32:22.851444  384891 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 11:32:22.851479  384891 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 11:32:22.979312  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:32:23.037624  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 11:32:23.037665  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 11:32:23.181268  384891 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 11:32:23.181304  384891 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 11:32:23.329836  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 11:32:23.329871  384891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 11:32:23.422160  384891 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 11:32:23.422204  384891 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 11:32:23.701377  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 11:32:23.701416  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 11:32:23.717985  384891 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:32:23.718012  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 11:32:23.962990  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 11:32:23.963023  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 11:32:24.062714  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:32:24.267101  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:32:24.267134  384891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 11:32:24.488660  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:32:28.211807  384891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 11:32:28.211865  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:28.215550  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:28.216113  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:28.216153  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:28.216343  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:28.216613  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:28.216834  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:28.217015  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:28.781684  384891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 11:32:29.027350  384891 addons.go:234] Setting addon gcp-auth=true in "addons-246818"
	I1007 11:32:29.027409  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:29.027725  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:29.027785  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:29.045375  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45379
	I1007 11:32:29.046015  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:29.046676  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:29.046709  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:29.047110  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:29.047622  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:29.047675  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:29.064290  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I1007 11:32:29.064871  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:29.065411  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:29.065438  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:29.065798  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:29.066019  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:29.068256  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:29.068576  384891 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 11:32:29.068609  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:29.071318  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:29.071806  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:29.071836  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:29.072091  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:29.072359  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:29.072612  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:29.072814  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:30.065708  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.403252117s)
	I1007 11:32:30.065784  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065796  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065811  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.33946418s)
	I1007 11:32:30.065857  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065865  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.495938324s)
	I1007 11:32:30.065881  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065898  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.327105535s)
	I1007 11:32:30.065926  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065900  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065941  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065947  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065956  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.314410411s)
	I1007 11:32:30.066001  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066014  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066107  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.251596479s)
	I1007 11:32:30.066132  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066140  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066201  384891 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.242099217s)
	I1007 11:32:30.066343  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.066347  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.066368  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.066367  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.066377  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066385  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066443  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.066444  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.066450  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.066458  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066464  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066496  384891 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.242103231s)
	I1007 11:32:30.066525  384891 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
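The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 here) from inside the cluster, then feeds the result back through "kubectl replace". Reassembled from the sed expression in the command above, the stanza it injects ahead of the "forward . /etc/resolv.conf" directive looks like this (whitespace approximate):

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

The second sed expression additionally inserts a "log" directive ahead of "errors" in the same Corefile.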
	I1007 11:32:30.066633  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.875604078s)
	I1007 11:32:30.066671  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066686  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066701  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.066711  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.066719  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066726  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066812  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.56119506s)
	I1007 11:32:30.066833  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066844  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066928  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.557663248s)
	I1007 11:32:30.066946  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066954  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067053  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067070  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.526488091s)
	I1007 11:32:30.067077  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067083  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067087  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067090  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067097  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067099  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067273  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.087920249s)
	W1007 11:32:30.067306  384891 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 11:32:30.067334  384891 retry.go:31] will retry after 318.73232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
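The failure above is an ordering problem rather than a broken manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the REST mapping for the new kind only exists once the API server has marked those CRDs Established. minikube's addon code handles this by simply retrying the whole apply, as the next lines show. An alternative is to wait for the CRD to become Established before applying objects of that kind; a hedged client-go sketch (imports, interval and timeout are assumptions, not minikube code):

// waitForCRDEstablished polls until the named CRD reports the Established
// condition, so custom resources of that kind can then be applied safely.
// Illustrative only; minikube's addon code retries the apply instead.
package main

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

func waitForCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // CRD not created yet; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}

For example, waiting on "volumesnapshotclasses.snapshot.storage.k8s.io" before applying csi-hostpath-snapshotclass.yaml would avoid the "ensure CRDs are installed first" error seen above.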
	I1007 11:32:30.067431  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.004678888s)
	I1007 11:32:30.067452  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067472  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067555  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067585  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067595  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067604  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067610  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067660  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067681  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067687  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067878  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067912  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067919  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067926  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067932  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.070203  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.070251  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.070258  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.070269  384891 addons.go:475] Verifying addon ingress=true in "addons-246818"
	I1007 11:32:30.070513  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.070568  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.070582  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.071060  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.071101  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.071110  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.071123  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.071132  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.071872  384891 out.go:177] * Verifying ingress addon...
	I1007 11:32:30.072804  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.072826  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072856  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.072870  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.072262  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072292  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.072969  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072327  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072351  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.072993  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073063  384891 node_ready.go:35] waiting up to 6m0s for node "addons-246818" to be "Ready" ...
	I1007 11:32:30.073157  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073172  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072402  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072428  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073301  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072444  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072472  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073375  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073383  384891 addons.go:475] Verifying addon registry=true in "addons-246818"
	I1007 11:32:30.072519  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072542  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073455  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073743  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.073754  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.072602  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072689  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072713  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073830  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073838  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.073844  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.072981  384891 addons.go:475] Verifying addon metrics-server=true in "addons-246818"
	I1007 11:32:30.072586  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073928  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073935  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.073941  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.074316  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.074355  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.074361  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.074555  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.074692  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.074699  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.074712  384891 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-246818 service yakd-dashboard -n yakd-dashboard
	
	I1007 11:32:30.074754  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.074782  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.074788  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.075159  384891 out.go:177] * Verifying registry addon...
	I1007 11:32:30.077150  384891 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 11:32:30.077593  384891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 11:32:30.087836  384891 node_ready.go:49] node "addons-246818" has status "Ready":"True"
	I1007 11:32:30.087865  384891 node_ready.go:38] duration metric: took 14.756038ms for node "addons-246818" to be "Ready" ...
	I1007 11:32:30.087879  384891 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:32:30.092003  384891 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 11:32:30.092039  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:30.095848  384891 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 11:32:30.095879  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:30.110889  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.110919  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.111265  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.111273  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.111288  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 11:32:30.111382  384891 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1007 11:32:30.120282  384891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9n6rn" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.121748  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.121764  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.122055  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.122109  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.122125  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.155261  384891 pod_ready.go:93] pod "coredns-7c65d6cfc9-9n6rn" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.155289  384891 pod_ready.go:82] duration metric: took 34.974077ms for pod "coredns-7c65d6cfc9-9n6rn" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.155302  384891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dzpc8" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.178588  384891 pod_ready.go:93] pod "coredns-7c65d6cfc9-dzpc8" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.178617  384891 pod_ready.go:82] duration metric: took 23.305528ms for pod "coredns-7c65d6cfc9-dzpc8" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.178629  384891 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.223158  384891 pod_ready.go:93] pod "etcd-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.223187  384891 pod_ready.go:82] duration metric: took 44.549581ms for pod "etcd-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.223197  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.253914  384891 pod_ready.go:93] pod "kube-apiserver-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.253941  384891 pod_ready.go:82] duration metric: took 30.73707ms for pod "kube-apiserver-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.253954  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.386868  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:32:30.476890  384891 pod_ready.go:93] pod "kube-controller-manager-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.476938  384891 pod_ready.go:82] duration metric: took 222.974328ms for pod "kube-controller-manager-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.476959  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l8kql" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.571544  384891 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-246818" context rescaled to 1 replicas
	I1007 11:32:30.582503  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:30.582873  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:30.914008  384891 pod_ready.go:93] pod "kube-proxy-l8kql" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.914040  384891 pod_ready.go:82] duration metric: took 437.071606ms for pod "kube-proxy-l8kql" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.914052  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:31.084293  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:31.084904  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:31.277897  384891 pod_ready.go:93] pod "kube-scheduler-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:31.277934  384891 pod_ready.go:82] duration metric: took 363.871437ms for pod "kube-scheduler-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:31.277953  384891 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:31.587346  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:31.587502  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:32.188862  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:32.296683  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:32.466486  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.977770361s)
	I1007 11:32:32.466545  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.466560  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.466611  384891 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.39800642s)
	I1007 11:32:32.466755  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.0798406s)
	I1007 11:32:32.466832  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.466844  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.466862  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.466889  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:32.466906  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.466915  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.466922  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.467112  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.467127  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.467136  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.467143  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.467213  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:32.467225  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.467235  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.467250  384891 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-246818"
	I1007 11:32:32.467411  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:32.467414  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.467424  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.468956  384891 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 11:32:32.469005  384891 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 11:32:32.470557  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:32:32.471269  384891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 11:32:32.472164  384891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 11:32:32.472191  384891 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 11:32:32.502795  384891 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 11:32:32.502824  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:32.554269  384891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 11:32:32.554306  384891 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 11:32:32.588477  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:32.588751  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:32.633642  384891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:32:32.633670  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 11:32:32.817741  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:32:32.975678  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:33.085784  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:33.086499  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:33.284978  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:33.476686  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:33.582171  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:33.582790  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:33.982427  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:34.084906  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:34.085799  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:34.308214  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.490411942s)
	I1007 11:32:34.308309  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:34.308332  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:34.308649  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:34.308705  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:34.308723  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:34.308741  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:34.308752  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:34.309132  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:34.309186  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:34.309202  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:34.310559  384891 addons.go:475] Verifying addon gcp-auth=true in "addons-246818"
	I1007 11:32:34.312007  384891 out.go:177] * Verifying gcp-auth addon...
	I1007 11:32:34.314730  384891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 11:32:34.340586  384891 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 11:32:34.340612  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:34.475714  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:34.582546  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:34.583308  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:34.818688  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:34.976405  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:35.082601  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:35.084039  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:35.285036  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:35.318158  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:35.477972  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:35.583376  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:35.583561  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:35.819531  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:35.975590  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:36.082179  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:36.082337  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:36.319330  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:36.476751  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:36.582692  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:36.584000  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:37.005486  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:37.006535  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:37.083365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:37.083910  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:37.287981  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:37.319722  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:37.477822  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:37.581529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:37.582720  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:37.819884  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:37.976935  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:38.082033  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:38.082405  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:38.318841  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:38.475607  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:38.581655  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:38.582226  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:38.819241  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:38.976848  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:39.082867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:39.083274  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:39.290395  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:39.318648  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:39.476451  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:39.582171  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:39.582624  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:39.819410  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:39.977333  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:40.081612  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:40.082203  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:40.319145  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:40.476723  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:40.581603  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:40.583149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:40.818385  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:40.977851  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:41.083017  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:41.083342  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:41.317798  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:41.475982  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:41.582409  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:41.582455  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:41.786127  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:41.819529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:41.976946  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:42.082000  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:42.082192  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:42.318601  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:42.475545  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:42.582736  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:42.583438  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:42.818333  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:42.976980  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:43.083098  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:43.083595  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:43.318576  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:43.503845  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:43.582649  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:43.583155  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:43.818278  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:43.976805  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:44.082470  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:44.082807  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:44.284958  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:44.319223  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:44.476657  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:44.582711  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:44.583066  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:44.818827  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:44.976149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:45.082276  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:45.082484  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:45.318464  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:45.476894  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:45.610547  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:45.610833  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:45.975833  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:45.996872  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:46.082114  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:46.082777  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:46.317822  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:46.476436  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:46.582945  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:46.583120  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:46.784162  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:46.818445  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:46.976526  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:47.082671  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:47.082833  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:47.319655  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:47.476921  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:47.581622  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:47.582699  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:47.818529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:47.977011  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:48.084165  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:48.086044  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:48.319215  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:48.484879  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:48.582304  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:48.582986  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:48.818694  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:48.976728  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:49.081291  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:49.082282  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:49.283787  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:49.318639  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:49.476339  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:49.582576  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:49.582919  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:49.818304  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:49.976650  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:50.081972  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:50.083388  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:50.319189  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:50.476949  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:50.581903  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:50.582534  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:51.138429  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:51.138593  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:51.139224  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:51.139625  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:51.284853  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:51.319510  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:51.478092  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:51.582296  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:51.583977  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:51.821388  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:51.977408  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:52.082306  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:52.082725  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:52.320270  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:52.477071  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:52.581676  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:52.582004  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:52.819335  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:52.976826  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:53.081715  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:53.082217  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:53.286270  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:53.318565  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:53.476657  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:53.582416  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:53.582912  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:53.821038  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:53.976548  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:54.083018  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:54.083157  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:54.318909  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:54.480652  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:54.583081  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:54.583782  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:54.819006  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:54.976399  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:55.081741  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:55.082950  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:55.318290  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:55.477525  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:55.582408  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:55.582694  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:55.784044  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:55.819410  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:55.976273  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:56.081493  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:56.081873  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:56.319113  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:56.476767  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:56.582149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:56.582756  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:56.818865  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:56.977253  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:57.081925  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:57.082420  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:57.318929  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:57.785145  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:57.785322  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:57.785444  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:57.799701  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:57.875340  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:57.976458  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:58.082124  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:58.082502  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:58.318902  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:58.476352  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:58.583758  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:58.583953  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:58.817729  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:58.975913  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:59.084032  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:59.086065  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:59.346848  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:59.476648  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:59.582942  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:59.584115  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:59.821365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:59.986819  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:00.081462  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:00.083518  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:00.287257  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:00.320992  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:00.476599  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:00.583058  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:00.583512  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:00.818832  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:00.976928  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:01.082142  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:01.082422  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:01.320347  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:01.476916  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:01.581829  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:01.582058  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:01.824411  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:01.978086  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:02.082410  384891 kapi.go:107] duration metric: took 32.004807404s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 11:33:02.082721  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:02.318823  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:02.476149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:02.581365  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:02.785380  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:02.819435  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:02.981119  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:03.082298  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:03.318836  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:03.475816  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:03.581866  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:03.820271  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:03.977531  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:04.081370  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:04.318861  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:04.478185  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:04.581057  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:04.786095  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:04.818861  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:04.977359  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:05.081577  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:05.319021  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:05.476415  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:05.582041  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:05.817893  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:05.977602  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:06.081923  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:06.319212  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:06.477018  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:06.582023  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:06.818841  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:06.976129  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:07.082189  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:07.286377  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:07.319883  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:07.476167  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:07.582756  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:07.818624  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:07.977713  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:08.081834  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:08.319188  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:08.477158  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:08.582912  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:08.818256  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:08.976773  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:09.082355  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:09.319241  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:09.476152  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:09.581908  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:09.784186  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:09.817949  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:09.976974  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:10.082168  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:10.318356  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:10.477137  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:10.581246  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:10.819236  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:10.976625  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:11.082510  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:11.319088  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:11.475963  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:11.581311  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:11.785390  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:11.818393  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:11.977640  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:12.081174  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:12.319522  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:12.476944  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:12.582131  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:12.818446  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:12.976621  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:13.081988  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:13.318911  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:13.484798  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:13.582395  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:13.819383  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:13.977648  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:14.082158  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:14.285577  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:14.318713  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:14.475847  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:14.582159  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:14.818441  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:14.977209  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:15.081963  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:15.318737  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:15.476205  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:15.583061  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:15.819153  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:15.976561  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:16.081683  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:16.318410  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:16.476630  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:16.581615  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:16.784072  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:16.818076  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:16.977198  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:17.081611  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:17.320061  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:17.476515  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:17.581786  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:17.818618  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:17.976464  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:18.084173  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:18.318030  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:18.477107  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:18.586160  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:18.784408  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:18.818855  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:18.975975  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:19.083601  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:19.319129  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:19.476165  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:19.581505  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:19.818001  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:19.976718  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:20.082101  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:20.319192  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:20.476616  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:20.581717  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:20.785149  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:20.818020  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:20.976775  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:21.082210  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:21.318711  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:21.475778  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:21.582480  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:21.819356  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:21.977763  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:22.082225  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:22.318697  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:22.476177  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:22.582015  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:22.817984  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:22.976500  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:23.081605  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:23.284652  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:23.319106  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:23.476419  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:23.581621  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:23.818519  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:23.976857  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:24.082273  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:24.319210  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:24.476471  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:24.581691  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:24.818346  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:24.976944  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:25.082182  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:25.285349  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:25.319385  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:25.476777  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:25.582609  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:25.818485  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:25.977168  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:26.082176  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:26.318509  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:26.476390  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:26.581578  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:26.819122  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:26.976649  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:27.081846  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:27.285801  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:27.319965  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:27.476748  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:27.582786  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:27.820119  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:27.977567  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:28.081776  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:28.321486  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:28.476034  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:28.580919  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:28.818302  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:28.976750  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:29.082261  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:29.318773  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:29.476952  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:29.582302  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:29.784755  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:29.818641  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:29.975885  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:30.082754  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:30.318788  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:30.476267  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:30.581482  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:30.818790  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:30.976169  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:31.082040  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:31.318394  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:31.477328  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:31.581590  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:31.785001  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:31.818455  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:31.977285  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:32.082645  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:32.319761  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:32.475996  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:32.580957  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:32.818618  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:32.981189  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:33.082222  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:33.318499  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:33.477371  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:33.581430  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:33.819139  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:33.976629  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:34.348998  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:34.349111  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:34.354582  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:34.477183  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:34.582017  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:34.818854  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:34.975708  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:35.082682  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:35.318096  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:35.476479  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:35.581982  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:35.818348  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:35.976667  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:36.082093  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:36.319301  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:36.477260  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:36.581116  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:36.785438  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:36.818479  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:36.976498  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:37.081603  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:37.318719  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:37.476366  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:37.582055  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:37.818735  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:37.975866  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:38.081879  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:38.318601  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:38.484592  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:38.582279  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:38.818547  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:38.975841  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:39.081986  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:39.284349  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:39.317923  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:39.476365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:39.582175  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:39.818974  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:39.975890  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:40.082033  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:40.318628  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:40.518043  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:40.582189  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:40.819150  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:40.979733  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:41.081822  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:41.284675  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:41.318611  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:41.475350  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:41.581870  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:41.817872  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:41.975624  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:42.082150  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:42.319800  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:42.479033  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:42.583338  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:42.819134  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:42.978046  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:43.083708  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:43.318837  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:43.476705  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:43.582056  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:43.785109  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:43.818104  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:43.976109  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:44.081416  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:44.318991  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:44.476151  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:44.596289  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:44.819051  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:44.976616  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:45.081745  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:45.318842  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:45.476739  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:45.582727  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:45.817867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:45.976600  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:46.082267  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:46.288414  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:46.319714  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:46.476643  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:46.582493  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:46.818948  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:46.977533  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:47.082182  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:47.318238  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:47.476983  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:47.583066  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:47.819252  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:47.978774  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:48.082507  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:48.318486  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:48.476123  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:48.583163  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:48.784677  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:48.822387  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:48.986510  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:49.086137  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:49.323706  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:49.481895  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:49.582564  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:49.819675  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:49.976031  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:50.082594  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:50.319558  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:50.478668  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:50.588098  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:50.788097  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:50.844238  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:50.976971  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:51.083864  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:51.319080  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:51.476545  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:51.581625  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:51.820026  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:51.986619  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:52.092476  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:52.319404  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:52.480622  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:52.588382  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:52.818422  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:52.976771  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:53.082063  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:53.286041  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:53.318561  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:53.476866  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:53.584944  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:53.818557  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:53.976619  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:54.081420  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:54.318813  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:54.475954  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:54.582481  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:54.818913  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:54.976100  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:55.082174  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:55.287305  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:55.318058  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:55.476320  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:55.582149  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:55.826567  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:55.981042  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:56.081276  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:56.319521  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:56.475650  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:56.581596  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:56.818574  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:56.975996  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:57.082643  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:57.626615  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:57.627586  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:57.627720  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:57.631472  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:57.818870  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:57.979364  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:58.081587  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:58.318085  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:58.476312  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:58.581156  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:58.826426  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:58.978242  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:59.081303  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:59.318911  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:59.478537  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:59.582057  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:59.785115  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:59.818776  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:59.980469  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:34:00.082381  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:00.319529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:00.477985  384891 kapi.go:107] duration metric: took 1m28.006709237s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 11:34:00.581976  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:00.819378  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:01.082606  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:01.319729  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:01.582377  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:01.785853  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:34:01.819079  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:02.082352  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:02.318806  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:02.583133  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:02.819833  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:03.082070  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:03.319057  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:03.582749  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:03.818867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:04.081986  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:04.285341  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:34:04.318345  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:04.581902  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:04.818896  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:05.082540  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:05.319169  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:05.582754  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:05.818610  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:06.081323  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:06.286945  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:34:06.319553  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:06.581733  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:06.819609  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:07.081656  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:07.288453  384891 pod_ready.go:93] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"True"
	I1007 11:34:07.288493  384891 pod_ready.go:82] duration metric: took 1m36.010528889s for pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace to be "Ready" ...
	I1007 11:34:07.288510  384891 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-8tqmv" in "kube-system" namespace to be "Ready" ...
	I1007 11:34:07.299285  384891 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-8tqmv" in "kube-system" namespace has status "Ready":"True"
	I1007 11:34:07.299313  384891 pod_ready.go:82] duration metric: took 10.79378ms for pod "nvidia-device-plugin-daemonset-8tqmv" in "kube-system" namespace to be "Ready" ...
	I1007 11:34:07.299332  384891 pod_ready.go:39] duration metric: took 1m37.211435839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:34:07.299353  384891 api_server.go:52] waiting for apiserver process to appear ...
	I1007 11:34:07.299401  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:34:07.299455  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:34:07.321320  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:07.350199  384891 cri.go:89] found id: "c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:07.350228  384891 cri.go:89] found id: ""
	I1007 11:34:07.350239  384891 logs.go:282] 1 containers: [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8]
	I1007 11:34:07.350311  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.355340  384891 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:34:07.355425  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:34:07.403255  384891 cri.go:89] found id: "1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:07.403284  384891 cri.go:89] found id: ""
	I1007 11:34:07.403293  384891 logs.go:282] 1 containers: [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4]
	I1007 11:34:07.403356  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.408181  384891 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:34:07.408259  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:34:07.456781  384891 cri.go:89] found id: "0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:07.456810  384891 cri.go:89] found id: ""
	I1007 11:34:07.456821  384891 logs.go:282] 1 containers: [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965]
	I1007 11:34:07.456880  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.461365  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:34:07.461432  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:34:07.503869  384891 cri.go:89] found id: "c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:07.503900  384891 cri.go:89] found id: ""
	I1007 11:34:07.503911  384891 logs.go:282] 1 containers: [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a]
	I1007 11:34:07.503986  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.508824  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:34:07.508912  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:34:07.553417  384891 cri.go:89] found id: "07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:07.553445  384891 cri.go:89] found id: ""
	I1007 11:34:07.553453  384891 logs.go:282] 1 containers: [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e]
	I1007 11:34:07.553507  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.558607  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:34:07.558691  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:34:07.582482  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:07.609104  384891 cri.go:89] found id: "8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:07.609133  384891 cri.go:89] found id: ""
	I1007 11:34:07.609143  384891 logs.go:282] 1 containers: [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae]
	I1007 11:34:07.609209  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.614014  384891 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:34:07.614095  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:34:07.669307  384891 cri.go:89] found id: ""
	I1007 11:34:07.669339  384891 logs.go:282] 0 containers: []
	W1007 11:34:07.669348  384891 logs.go:284] No container was found matching "kindnet"
	I1007 11:34:07.669360  384891 logs.go:123] Gathering logs for dmesg ...
	I1007 11:34:07.669374  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:34:07.692510  384891 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:34:07.692553  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:34:07.820538  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:07.833306  384891 logs.go:123] Gathering logs for kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] ...
	I1007 11:34:07.833344  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:07.881834  384891 logs.go:123] Gathering logs for kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] ...
	I1007 11:34:07.881872  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:07.922102  384891 logs.go:123] Gathering logs for kubelet ...
	I1007 11:34:07.922135  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 11:34:07.994930  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:07.995159  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:08.014966  384891 logs.go:123] Gathering logs for coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] ...
	I1007 11:34:08.015007  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:08.059810  384891 logs.go:123] Gathering logs for kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] ...
	I1007 11:34:08.059846  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:08.082446  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:08.118806  384891 logs.go:123] Gathering logs for kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] ...
	I1007 11:34:08.118857  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:08.183364  384891 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:34:08.183410  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:34:08.319460  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:08.583736  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:08.819563  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:08.851907  384891 logs.go:123] Gathering logs for container status ...
	I1007 11:34:08.851975  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:34:08.905544  384891 logs.go:123] Gathering logs for etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] ...
	I1007 11:34:08.905576  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:08.973774  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:08.973822  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 11:34:08.973898  384891 out.go:270] X Problems detected in kubelet:
	W1007 11:34:08.973917  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:08.973935  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:08.973949  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:08.973962  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:34:09.082037  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:09.319301  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:09.582172  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:09.818720  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:10.083461  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:10.318771  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:10.582330  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:10.819089  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:11.081911  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:11.321748  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:11.581492  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:11.818375  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:12.082063  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:12.319965  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:12.582369  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:12.819383  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:13.082206  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:13.318240  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:13.583364  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:13.818316  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:14.081551  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:14.318945  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:14.581789  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:14.819411  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:15.081875  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:15.318853  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:15.582528  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:15.818834  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:16.081977  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:16.318787  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:16.582509  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:16.818784  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:17.082467  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:17.319180  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:17.583829  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:17.819020  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:18.083259  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:18.318588  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:18.585693  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:18.818464  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:18.975488  384891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:34:18.998847  384891 api_server.go:72] duration metric: took 1m57.962235499s to wait for apiserver process to appear ...
	I1007 11:34:18.998888  384891 api_server.go:88] waiting for apiserver healthz status ...
	I1007 11:34:18.998936  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:34:18.999018  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:34:19.040445  384891 cri.go:89] found id: "c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:19.040469  384891 cri.go:89] found id: ""
	I1007 11:34:19.040485  384891 logs.go:282] 1 containers: [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8]
	I1007 11:34:19.040551  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.046554  384891 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:34:19.046621  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:34:19.082671  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:19.092133  384891 cri.go:89] found id: "1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:19.092166  384891 cri.go:89] found id: ""
	I1007 11:34:19.092176  384891 logs.go:282] 1 containers: [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4]
	I1007 11:34:19.092241  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.096808  384891 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:34:19.096908  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:34:19.138989  384891 cri.go:89] found id: "0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:19.139023  384891 cri.go:89] found id: ""
	I1007 11:34:19.139035  384891 logs.go:282] 1 containers: [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965]
	I1007 11:34:19.139100  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.143619  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:34:19.143693  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:34:19.191484  384891 cri.go:89] found id: "c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:19.191512  384891 cri.go:89] found id: ""
	I1007 11:34:19.191523  384891 logs.go:282] 1 containers: [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a]
	I1007 11:34:19.191676  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.196448  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:34:19.196521  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:34:19.242455  384891 cri.go:89] found id: "07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:19.242492  384891 cri.go:89] found id: ""
	I1007 11:34:19.242503  384891 logs.go:282] 1 containers: [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e]
	I1007 11:34:19.242564  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.248534  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:34:19.248629  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:34:19.291085  384891 cri.go:89] found id: "8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:19.291114  384891 cri.go:89] found id: ""
	I1007 11:34:19.291124  384891 logs.go:282] 1 containers: [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae]
	I1007 11:34:19.291194  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.295722  384891 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:34:19.295810  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:34:19.318088  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:19.340630  384891 cri.go:89] found id: ""
	I1007 11:34:19.340658  384891 logs.go:282] 0 containers: []
	W1007 11:34:19.340668  384891 logs.go:284] No container was found matching "kindnet"
	I1007 11:34:19.340678  384891 logs.go:123] Gathering logs for kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] ...
	I1007 11:34:19.340701  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:19.398366  384891 logs.go:123] Gathering logs for kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] ...
	I1007 11:34:19.398413  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:19.441039  384891 logs.go:123] Gathering logs for kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] ...
	I1007 11:34:19.441071  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:19.515511  384891 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:34:19.515559  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:34:19.581392  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:19.820008  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:20.082996  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:20.318698  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:20.371437  384891 logs.go:123] Gathering logs for kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] ...
	I1007 11:34:20.371566  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:20.421572  384891 logs.go:123] Gathering logs for container status ...
	I1007 11:34:20.421622  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:34:20.473855  384891 logs.go:123] Gathering logs for kubelet ...
	I1007 11:34:20.473898  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 11:34:20.539155  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:20.539346  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:20.560434  384891 logs.go:123] Gathering logs for dmesg ...
	I1007 11:34:20.560477  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:34:20.578609  384891 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:34:20.578644  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:34:20.582162  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:20.705740  384891 logs.go:123] Gathering logs for etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] ...
	I1007 11:34:20.705772  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:20.771436  384891 logs.go:123] Gathering logs for coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] ...
	I1007 11:34:20.771482  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:20.817335  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:20.817370  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 11:34:20.817442  384891 out.go:270] X Problems detected in kubelet:
	W1007 11:34:20.817457  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:20.817470  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:20.817479  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:20.817488  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:34:20.818512  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:21.082056  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:21.318867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:21.582262  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:21.818795  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:22.083232  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:22.318990  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:22.582413  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:22.819076  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:23.082537  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:23.318303  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:23.583644  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:23.818519  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:24.081687  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:24.318430  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:24.582120  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:24.819111  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:25.086365  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:25.320747  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:25.582278  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:25.819707  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:26.082436  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:26.319403  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:26.582434  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:26.819099  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:27.082857  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:27.318289  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:27.581568  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:27.819777  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:28.081999  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:28.318751  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:28.582679  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:28.818757  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:29.082323  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:29.318830  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:29.582031  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:29.818723  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:30.082134  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:30.319885  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:30.581940  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:30.818806  384891 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1007 11:34:30.824530  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:30.825860  384891 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1007 11:34:30.826750  384891 api_server.go:141] control plane version: v1.31.1
	I1007 11:34:30.826782  384891 api_server.go:131] duration metric: took 11.827885179s to wait for apiserver health ...
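The healthz check recorded at api_server.go:253 and :279 above is an HTTPS GET that is retried until the endpoint answers 200 with a body of "ok". A minimal sketch of that style of probe is shown here; the URL matches the log, but the timeout, retry interval, and the decision to skip TLS verification (this sketch has no access to the cluster CA material) are assumptions for illustration, not minikube's actual api_server code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 with a body of "ok",
// or gives up once the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipped only because this sketch does not carry the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	// Address taken from the log line above; adjust for another cluster.
	if err := waitForHealthz("https://192.168.39.141:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}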
	I1007 11:34:30.826793  384891 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 11:34:30.826818  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:34:30.826869  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:34:30.868009  384891 cri.go:89] found id: "c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:30.868043  384891 cri.go:89] found id: ""
	I1007 11:34:30.868054  384891 logs.go:282] 1 containers: [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8]
	I1007 11:34:30.868116  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:30.872897  384891 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:34:30.872982  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:34:30.921766  384891 cri.go:89] found id: "1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:30.921797  384891 cri.go:89] found id: ""
	I1007 11:34:30.921807  384891 logs.go:282] 1 containers: [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4]
	I1007 11:34:30.921872  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:30.926658  384891 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:34:30.926751  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:34:30.967084  384891 cri.go:89] found id: "0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:30.967110  384891 cri.go:89] found id: ""
	I1007 11:34:30.967121  384891 logs.go:282] 1 containers: [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965]
	I1007 11:34:30.967184  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:30.971720  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:34:30.971806  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:34:31.014014  384891 cri.go:89] found id: "c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:31.014051  384891 cri.go:89] found id: ""
	I1007 11:34:31.014063  384891 logs.go:282] 1 containers: [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a]
	I1007 11:34:31.014128  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:31.019324  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:34:31.019476  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:34:31.061685  384891 cri.go:89] found id: "07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:31.061719  384891 cri.go:89] found id: ""
	I1007 11:34:31.061730  384891 logs.go:282] 1 containers: [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e]
	I1007 11:34:31.061791  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:31.066589  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:34:31.066673  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:34:31.081745  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:31.112923  384891 cri.go:89] found id: "8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:31.112948  384891 cri.go:89] found id: ""
	I1007 11:34:31.112957  384891 logs.go:282] 1 containers: [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae]
	I1007 11:34:31.113010  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:31.118016  384891 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:34:31.118089  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:34:31.171358  384891 cri.go:89] found id: ""
	I1007 11:34:31.171390  384891 logs.go:282] 0 containers: []
	W1007 11:34:31.171402  384891 logs.go:284] No container was found matching "kindnet"
	I1007 11:34:31.171415  384891 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:34:31.171439  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:34:31.307909  384891 logs.go:123] Gathering logs for kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] ...
	I1007 11:34:31.307947  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:31.318066  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:31.370102  384891 logs.go:123] Gathering logs for coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] ...
	I1007 11:34:31.370145  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:31.412898  384891 logs.go:123] Gathering logs for kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] ...
	I1007 11:34:31.412929  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:31.455361  384891 logs.go:123] Gathering logs for kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] ...
	I1007 11:34:31.455399  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:31.525681  384891 logs.go:123] Gathering logs for container status ...
	I1007 11:34:31.525726  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:34:31.581299  384891 logs.go:123] Gathering logs for kubelet ...
	I1007 11:34:31.581352  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 11:34:31.582018  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1007 11:34:31.650024  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:31.650226  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:31.671782  384891 logs.go:123] Gathering logs for dmesg ...
	I1007 11:34:31.671817  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:34:31.692198  384891 logs.go:123] Gathering logs for etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] ...
	I1007 11:34:31.692235  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:31.760832  384891 logs.go:123] Gathering logs for kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] ...
	I1007 11:34:31.760880  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:31.809091  384891 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:34:31.809129  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:34:31.818667  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:32.083426  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:32.318110  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:32.582254  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:32.686330  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:32.686374  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 11:34:32.686450  384891 out.go:270] X Problems detected in kubelet:
	W1007 11:34:32.686461  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:32.686473  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:32.686481  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:32.686488  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:34:32.820112  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:33.082098  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:33.319357  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:33.583417  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:33.819012  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:34.082102  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:34.318854  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:34.582183  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:34.819365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:35.082034  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:35.318900  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:35.582595  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:35.819015  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:36.081981  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:36.319063  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:36.582084  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:36.818989  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:37.082637  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:37.318307  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:37.582037  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:37.819608  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:38.082058  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:38.319071  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:38.582896  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:38.818216  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:39.082926  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:39.318258  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:39.582671  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:39.819037  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:40.082183  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:40.319106  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:40.582450  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:40.818611  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:41.082311  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:41.319060  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:41.582150  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:41.819047  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:42.081964  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:42.318809  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:42.582264  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:42.694665  384891 system_pods.go:59] 17 kube-system pods found
	I1007 11:34:42.694702  384891 system_pods.go:61] "coredns-7c65d6cfc9-9n6rn" [a65cd5da-6560-4c5a-9311-ca855450e9a9] Running
	I1007 11:34:42.694707  384891 system_pods.go:61] "csi-hostpath-attacher-0" [91820122-4ed3-4251-b1fd-f63756f7e814] Running
	I1007 11:34:42.694711  384891 system_pods.go:61] "csi-hostpath-resizer-0" [2a120d65-04bc-42e4-b324-49d7300d4ed8] Running
	I1007 11:34:42.694716  384891 system_pods.go:61] "csi-hostpathplugin-d8rpq" [52c9f352-e70d-47a1-907f-b13d53f6bc60] Running
	I1007 11:34:42.694719  384891 system_pods.go:61] "etcd-addons-246818" [bb627733-dff2-491c-8308-3ac74e5903dc] Running
	I1007 11:34:42.694723  384891 system_pods.go:61] "kube-apiserver-addons-246818" [e9c4665f-2478-4c1f-9cbf-0619491257dd] Running
	I1007 11:34:42.694726  384891 system_pods.go:61] "kube-controller-manager-addons-246818" [5c61899b-9f40-4b5d-b0ab-a796a3c1c8ba] Running
	I1007 11:34:42.694730  384891 system_pods.go:61] "kube-ingress-dns-minikube" [830d0746-7b01-4a11-a0ee-8f9298e96c17] Running
	I1007 11:34:42.694733  384891 system_pods.go:61] "kube-proxy-l8kql" [847b99db-d42a-483a-87e5-f70b492c2430] Running
	I1007 11:34:42.694738  384891 system_pods.go:61] "kube-scheduler-addons-246818" [1fbb2a15-cc03-4580-94f0-5afee1897222] Running
	I1007 11:34:42.694741  384891 system_pods.go:61] "metrics-server-84c5f94fbc-q6j6p" [f37e3b43-4ce4-4879-babb-e6efdf0f3163] Running
	I1007 11:34:42.694746  384891 system_pods.go:61] "nvidia-device-plugin-daemonset-8tqmv" [69715854-4ded-41a3-83c7-1c8c927935d3] Running
	I1007 11:34:42.694749  384891 system_pods.go:61] "registry-66c9cd494c-pdbhh" [0abb32c0-d3dc-447d-a3b9-d672a6f088ff] Running
	I1007 11:34:42.694752  384891 system_pods.go:61] "registry-proxy-nczxq" [f47e8fd0-0149-4ade-8c43-90e4eeb9b7cf] Running
	I1007 11:34:42.694756  384891 system_pods.go:61] "snapshot-controller-56fcc65765-q9hxr" [189d7791-dda8-49aa-b59d-36fdbc31d559] Running
	I1007 11:34:42.694759  384891 system_pods.go:61] "snapshot-controller-56fcc65765-q9tkd" [1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91] Running
	I1007 11:34:42.694763  384891 system_pods.go:61] "storage-provisioner" [2f27f3bc-8533-41d5-b82e-373f84b67952] Running
	I1007 11:34:42.694769  384891 system_pods.go:74] duration metric: took 11.867969785s to wait for pod list to return data ...
	I1007 11:34:42.694780  384891 default_sa.go:34] waiting for default service account to be created ...
	I1007 11:34:42.697608  384891 default_sa.go:45] found service account: "default"
	I1007 11:34:42.697642  384891 default_sa.go:55] duration metric: took 2.852196ms for default service account to be created ...
	I1007 11:34:42.697656  384891 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 11:34:42.706719  384891 system_pods.go:86] 17 kube-system pods found
	I1007 11:34:42.706756  384891 system_pods.go:89] "coredns-7c65d6cfc9-9n6rn" [a65cd5da-6560-4c5a-9311-ca855450e9a9] Running
	I1007 11:34:42.706762  384891 system_pods.go:89] "csi-hostpath-attacher-0" [91820122-4ed3-4251-b1fd-f63756f7e814] Running
	I1007 11:34:42.706766  384891 system_pods.go:89] "csi-hostpath-resizer-0" [2a120d65-04bc-42e4-b324-49d7300d4ed8] Running
	I1007 11:34:42.706770  384891 system_pods.go:89] "csi-hostpathplugin-d8rpq" [52c9f352-e70d-47a1-907f-b13d53f6bc60] Running
	I1007 11:34:42.706774  384891 system_pods.go:89] "etcd-addons-246818" [bb627733-dff2-491c-8308-3ac74e5903dc] Running
	I1007 11:34:42.706778  384891 system_pods.go:89] "kube-apiserver-addons-246818" [e9c4665f-2478-4c1f-9cbf-0619491257dd] Running
	I1007 11:34:42.706782  384891 system_pods.go:89] "kube-controller-manager-addons-246818" [5c61899b-9f40-4b5d-b0ab-a796a3c1c8ba] Running
	I1007 11:34:42.706788  384891 system_pods.go:89] "kube-ingress-dns-minikube" [830d0746-7b01-4a11-a0ee-8f9298e96c17] Running
	I1007 11:34:42.706791  384891 system_pods.go:89] "kube-proxy-l8kql" [847b99db-d42a-483a-87e5-f70b492c2430] Running
	I1007 11:34:42.706795  384891 system_pods.go:89] "kube-scheduler-addons-246818" [1fbb2a15-cc03-4580-94f0-5afee1897222] Running
	I1007 11:34:42.706800  384891 system_pods.go:89] "metrics-server-84c5f94fbc-q6j6p" [f37e3b43-4ce4-4879-babb-e6efdf0f3163] Running
	I1007 11:34:42.706805  384891 system_pods.go:89] "nvidia-device-plugin-daemonset-8tqmv" [69715854-4ded-41a3-83c7-1c8c927935d3] Running
	I1007 11:34:42.706808  384891 system_pods.go:89] "registry-66c9cd494c-pdbhh" [0abb32c0-d3dc-447d-a3b9-d672a6f088ff] Running
	I1007 11:34:42.706812  384891 system_pods.go:89] "registry-proxy-nczxq" [f47e8fd0-0149-4ade-8c43-90e4eeb9b7cf] Running
	I1007 11:34:42.706815  384891 system_pods.go:89] "snapshot-controller-56fcc65765-q9hxr" [189d7791-dda8-49aa-b59d-36fdbc31d559] Running
	I1007 11:34:42.706819  384891 system_pods.go:89] "snapshot-controller-56fcc65765-q9tkd" [1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91] Running
	I1007 11:34:42.706823  384891 system_pods.go:89] "storage-provisioner" [2f27f3bc-8533-41d5-b82e-373f84b67952] Running
	I1007 11:34:42.706835  384891 system_pods.go:126] duration metric: took 9.170306ms to wait for k8s-apps to be running ...
	I1007 11:34:42.706847  384891 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 11:34:42.706901  384891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:34:42.725146  384891 system_svc.go:56] duration metric: took 18.286276ms WaitForService to wait for kubelet
	I1007 11:34:42.725182  384891 kubeadm.go:582] duration metric: took 2m21.688585174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:34:42.725203  384891 node_conditions.go:102] verifying NodePressure condition ...
	I1007 11:34:42.728139  384891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 11:34:42.728194  384891 node_conditions.go:123] node cpu capacity is 2
	I1007 11:34:42.728211  384891 node_conditions.go:105] duration metric: took 3.001618ms to run NodePressure ...
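The NodePressure step above reads each node's advertised capacity (the ephemeral-storage and cpu figures in the log) from the node status. A short client-go sketch of reading those fields follows; the kubeconfig path is an assumption, and this illustrates where the numbers come from rather than reproducing minikube's node_conditions code.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube writes its own kubeconfig path.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// Capacity is a map of resource name to quantity in the node status.
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
			node.Name, storage.String(), cpu.String())
	}
}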
	I1007 11:34:42.728226  384891 start.go:241] waiting for startup goroutines ...
	I1007 11:34:42.819517  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:43.082232  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:43.319050  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:43.582210  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:43.819348  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:44.081779  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:44.318592  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:44.581627  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:44.818069  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:45.082710  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:45.319371  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:45.581377  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:45.818428  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:46.083012  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:46.320632  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:46.581260  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:46.819209  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:47.082692  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:47.318983  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:47.582357  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:47.823398  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:48.082344  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:48.318267  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:48.581439  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:48.820231  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:49.082123  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:49.318989  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:49.582868  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:49.820088  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:50.084119  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:50.318944  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:50.581942  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:50.818634  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:51.082987  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:51.319771  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:51.582116  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:51.819251  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:52.082449  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:52.318176  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:52.582176  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:52.819387  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:53.081651  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:53.319024  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:53.582594  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:53.819107  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:54.082146  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:54.318787  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:54.582627  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:54.818201  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:55.204294  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:55.319426  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:55.583686  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:55.819569  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:56.082731  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:56.318631  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:56.581113  384891 kapi.go:107] duration metric: took 2m26.503967901s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 11:34:56.819419  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:57.319107  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:57.818908  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:58.322546  384891 kapi.go:107] duration metric: took 2m24.007812557s to wait for kubernetes.io/minikube-addons=gcp-auth ...
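The kapi.go:96 lines that dominate this stretch of the log are a poll loop: list pods matching a label selector and keep retrying until one reports a Running phase. A compact client-go sketch of that pattern is below; the namespace, selector, timeout, and roughly half-second interval are taken or inferred from the log, and the code is an illustration rather than the kapi package itself.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPod polls the namespace until a pod matching selector is Running.
func waitForPod(client kubernetes.Interface, namespace, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(namespace).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, pod := range pods.Items {
				if pod.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		// Matches the roughly half-second cadence visible in the log above.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no Running pod matching %q in %s after %s", selector, namespace, timeout)
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPod(client, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}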
	I1007 11:34:58.323908  384891 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-246818 cluster.
	I1007 11:34:58.325270  384891 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 11:34:58.326576  384891 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 11:34:58.328149  384891 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, inspektor-gadget, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1007 11:34:58.329558  384891 addons.go:510] duration metric: took 2m37.292909623s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin metrics-server inspektor-gadget cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1007 11:34:58.329605  384891 start.go:246] waiting for cluster config update ...
	I1007 11:34:58.329625  384891 start.go:255] writing updated cluster config ...
	I1007 11:34:58.329888  384891 ssh_runner.go:195] Run: rm -f paused
	I1007 11:34:58.382842  384891 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 11:34:58.384942  384891 out.go:177] * Done! kubectl is now configured to use "addons-246818" cluster and "default" namespace by default
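The message above about the `gcp-auth-skip-secret` label is the documented opt-out: pods carrying that label key are left untouched by the gcp-auth credential mounting. A sketch of creating such a pod with client-go follows; the pod name and the label value "true" are illustrative assumptions (per the message, the presence of the key is what matters), and a plain YAML manifest carrying the same label would work equally well.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds",
			// The label key tells the gcp-auth addon to skip this pod;
			// the value "true" is an assumption for illustration.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}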
	
	
	==> CRI-O <==
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.300662766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301736300601086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=522b9357-e1e9-415b-b422-443a4f5e076c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.301723573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a16357f-9863-436e-9470-f563fc8b8a72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.301783661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a16357f-9863-436e-9470-f563fc8b8a72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.302199727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebc0ebd2dc5ea489727702fe5287176a8dfee72d4a838bf924a405e1bc8d5263,PodSandboxId:98d35412f9c27800e5c40501f3ede13c5e838a76ca75f3909c983f43d9e91aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728301586316602787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d331845-59f4-4092-938c-97591d81951b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907
f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
2c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17283008317275923
99,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e,PodSandboxId:ddcecf5804f3432f425ed1b78bdd0add063adc43981b8616db59207cbca9cbdb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728300824605663492,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6kwqv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 061506d6-ef07-4852-b9f4-9c28e30da0be,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f
11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Imag
e:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85661d1841a178ef76cf9daa9c30150b7e5f427eb86c2a77593ab5a880ef168,PodSandboxId:058d68203dc5a10d4ad6bf69b9b157da8f2de1df0dc98b9b6a2db3c5374fe3ec,Metadata:&Container
Metadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728300782851943338,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6j6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37e3b43-4ce4-4879-babb-e6efdf0f3163,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe11773
5565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626
cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodSandboxId:4af52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8,PodSandboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxId:660fb1dd2d72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99fa76880ed25bbc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a16357f-9863-436e-9470-f563fc8b8a72 name=/runtime.v1.RuntimeService/ListContainers
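	The Version, ImageFsInfo and ListContainers entries above are the kubelet polling CRI-O over the CRI API. Assuming crictl is available in the guest (reachable, for example, with `minikube ssh -p addons-246818`), the same data can be fetched by hand:

	sudo crictl version        # RuntimeService/Version
	sudo crictl imagefsinfo    # ImageService/ImageFsInfo
	sudo crictl ps -a          # RuntimeService/ListContainers with no filters

	The last command should list the same containers that appear in the ListContainersResponse above, including the busybox container in the default namespace.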
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.341854891Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7594acbc-4f6a-4d20-bff7-0e561ab04e37 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.341947418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7594acbc-4f6a-4d20-bff7-0e561ab04e37 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.343117043Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=720adaba-deb2-4bc9-b65c-2896e9c546e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.344307382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301736344240159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=720adaba-deb2-4bc9-b65c-2896e9c546e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.345033052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d07057bf-b114-43a7-afed-9dcc6de90b32 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.345104131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d07057bf-b114-43a7-afed-9dcc6de90b32 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.345539716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebc0ebd2dc5ea489727702fe5287176a8dfee72d4a838bf924a405e1bc8d5263,PodSandboxId:98d35412f9c27800e5c40501f3ede13c5e838a76ca75f3909c983f43d9e91aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728301586316602787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d331845-59f4-4092-938c-97591d81951b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907
f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
2c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17283008317275923
99,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e,PodSandboxId:ddcecf5804f3432f425ed1b78bdd0add063adc43981b8616db59207cbca9cbdb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728300824605663492,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6kwqv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 061506d6-ef07-4852-b9f4-9c28e30da0be,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f
11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Imag
e:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85661d1841a178ef76cf9daa9c30150b7e5f427eb86c2a77593ab5a880ef168,PodSandboxId:058d68203dc5a10d4ad6bf69b9b157da8f2de1df0dc98b9b6a2db3c5374fe3ec,Metadata:&Container
Metadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728300782851943338,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6j6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37e3b43-4ce4-4879-babb-e6efdf0f3163,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe11773
5565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626
cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodSandboxId:4af52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8,PodSandboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxId:660fb1dd2d72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99fa76880ed25bbc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d07057bf-b114-43a7-afed-9dcc6de90b32 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.389563482Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b173239d-4d97-4323-94a3-49945ad7d1c4 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.389677961Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b173239d-4d97-4323-94a3-49945ad7d1c4 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.390829633Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7822fe55-f934-4e20-80eb-619d621c6fb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.392112753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301736392057473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7822fe55-f934-4e20-80eb-619d621c6fb3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.392896593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9831643c-201c-42eb-8f3e-ddec5a181c64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.392966183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9831643c-201c-42eb-8f3e-ddec5a181c64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.393580639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebc0ebd2dc5ea489727702fe5287176a8dfee72d4a838bf924a405e1bc8d5263,PodSandboxId:98d35412f9c27800e5c40501f3ede13c5e838a76ca75f3909c983f43d9e91aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728301586316602787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d331845-59f4-4092-938c-97591d81951b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907
f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
2c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17283008317275923
99,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e,PodSandboxId:ddcecf5804f3432f425ed1b78bdd0add063adc43981b8616db59207cbca9cbdb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728300824605663492,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6kwqv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 061506d6-ef07-4852-b9f4-9c28e30da0be,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f
11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Imag
e:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85661d1841a178ef76cf9daa9c30150b7e5f427eb86c2a77593ab5a880ef168,PodSandboxId:058d68203dc5a10d4ad6bf69b9b157da8f2de1df0dc98b9b6a2db3c5374fe3ec,Metadata:&Container
Metadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728300782851943338,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6j6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37e3b43-4ce4-4879-babb-e6efdf0f3163,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe11773
5565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626
cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodSandboxId:4af52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8,PodSandboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxId:660fb1dd2d72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99fa76880ed25bbc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9831643c-201c-42eb-8f3e-ddec5a181c64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.439635198Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ef80c5c-a958-4a29-ae69-bde7919763cb name=/runtime.v1.RuntimeService/Version
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.439728737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ef80c5c-a958-4a29-ae69-bde7919763cb name=/runtime.v1.RuntimeService/Version
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.441014985Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f7571e9-dd26-4941-9fea-f55f0912d3cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.442963619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301736442930556,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f7571e9-dd26-4941-9fea-f55f0912d3cb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.443583990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ed4ab9b-6089-4492-bd79-268e76d2bf43 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.443658124Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ed4ab9b-6089-4492-bd79-268e76d2bf43 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:48:56 addons-246818 crio[659]: time="2024-10-07 11:48:56.444104427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebc0ebd2dc5ea489727702fe5287176a8dfee72d4a838bf924a405e1bc8d5263,PodSandboxId:98d35412f9c27800e5c40501f3ede13c5e838a76ca75f3909c983f43d9e91aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728301586316602787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d331845-59f4-4092-938c-97591d81951b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907
f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
2c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17283008317275923
99,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e,PodSandboxId:ddcecf5804f3432f425ed1b78bdd0add063adc43981b8616db59207cbca9cbdb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},
UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1728300824605663492,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6kwqv,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 061506d6-ef07-4852-b9f4-9c28e30da0be,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f
11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Imag
e:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85661d1841a178ef76cf9daa9c30150b7e5f427eb86c2a77593ab5a880ef168,PodSandboxId:058d68203dc5a10d4ad6bf69b9b157da8f2de1df0dc98b9b6a2db3c5374fe3ec,Metadata:&Container
Metadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728300782851943338,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-q6j6p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f37e3b43-4ce4-4879-babb-e6efdf0f3163,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe11773
5565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626
cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termi
nation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodSandboxId:4af52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8,PodSandboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxId:660fb1dd2d72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99fa76880ed25bbc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.ku
bernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ed4ab9b-6089-4492-bd79-268e76d2bf43 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	ebc0ebd2dc5ea       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          2 minutes ago       Running             busybox                                  0                   98d35412f9c27       busybox
	018072193f0f9       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                                              5 minutes ago       Running             nginx                                    0                   d49de85842a0d       nginx
	57828bc9be9d5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          14 minutes ago      Running             csi-snapshotter                          0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	47756e0237323       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 minutes ago      Running             csi-provisioner                          0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	55b8cd6e90ea4       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 minutes ago      Running             liveness-probe                           0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	870a7af54cbdc       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           15 minutes ago      Running             hostpath                                 0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	d50c7be11d706       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                15 minutes ago      Running             node-driver-registrar                    0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	cb3c49e5a57e0       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             15 minutes ago      Running             csi-attacher                             0                   8bd0ba34143b7       csi-hostpath-attacher-0
	79ce9975927b1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   15 minutes ago      Running             csi-external-health-monitor-controller   0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	ea77c9e2ea78d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              15 minutes ago      Running             csi-resizer                              0                   37fe00b1ba658       csi-hostpath-resizer-0
	1944cdab75253       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             15 minutes ago      Running             local-path-provisioner                   0                   ddcecf5804f34       local-path-provisioner-86d989889c-6kwqv
	72f67a14ad810       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      15 minutes ago      Running             volume-snapshot-controller               0                   b45a2edd29772       snapshot-controller-56fcc65765-q9tkd
	d4f852c16268c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      15 minutes ago      Running             volume-snapshot-controller               0                   ad1976920b544       snapshot-controller-56fcc65765-q9hxr
	b85661d1841a1       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        15 minutes ago      Running             metrics-server                           0                   058d68203dc5a       metrics-server-84c5f94fbc-q6j6p
	64b3fe56b0b4d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             16 minutes ago      Running             storage-provisioner                      0                   4d66856d95293       storage-provisioner
	0282c1110abcf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             16 minutes ago      Running             coredns                                  0                   81ad4b72c15e5       coredns-7c65d6cfc9-9n6rn
	07021166cf32e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             16 minutes ago      Running             kube-proxy                               0                   946e3367f9d80       kube-proxy-l8kql
	c89d7f8df3494       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             16 minutes ago      Running             kube-scheduler                           0                   660fb1dd2d723       kube-scheduler-addons-246818
	8f63af3616abb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             16 minutes ago      Running             kube-controller-manager                  0                   4af52b2553e39       kube-controller-manager-addons-246818
	1c2b9ede2bcb3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             16 minutes ago      Running             etcd                                     0                   d314e18e8281d       etcd-addons-246818
	c555e8eeff012       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             16 minutes ago      Running             kube-apiserver                           0                   9eda8e53f6a53       kube-apiserver-addons-246818
	
	
	==> coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] <==
	[INFO] 10.244.0.20:35979 - 50929 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000094399s
	[INFO] 10.244.0.20:35979 - 42029 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065933s
	[INFO] 10.244.0.20:35979 - 25183 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057751s
	[INFO] 10.244.0.20:35979 - 54907 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000095463s
	[INFO] 10.244.0.20:34909 - 60733 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000120477s
	[INFO] 10.244.0.20:34909 - 42487 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068621s
	[INFO] 10.244.0.20:34909 - 31874 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057394s
	[INFO] 10.244.0.20:34909 - 13788 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054117s
	[INFO] 10.244.0.20:34909 - 6549 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051197s
	[INFO] 10.244.0.20:34909 - 4644 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064603s
	[INFO] 10.244.0.20:34909 - 56885 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058973s
	[INFO] 10.244.0.20:57201 - 16169 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000178388s
	[INFO] 10.244.0.20:59552 - 54214 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000188326s
	[INFO] 10.244.0.20:59552 - 7076 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066726s
	[INFO] 10.244.0.20:57201 - 48302 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053174s
	[INFO] 10.244.0.20:57201 - 24270 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042552s
	[INFO] 10.244.0.20:59552 - 29538 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082299s
	[INFO] 10.244.0.20:59552 - 36425 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000192845s
	[INFO] 10.244.0.20:59552 - 53723 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000349523s
	[INFO] 10.244.0.20:57201 - 43093 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000092543s
	[INFO] 10.244.0.20:57201 - 60283 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00026043s
	[INFO] 10.244.0.20:59552 - 65427 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000100959s
	[INFO] 10.244.0.20:59552 - 6694 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000188822s
	[INFO] 10.244.0.20:57201 - 24145 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000109508s
	[INFO] 10.244.0.20:57201 - 8067 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000089735s
	
	
	==> describe nodes <==
	Name:               addons-246818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-246818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=addons-246818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T11_32_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-246818
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-246818"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 11:32:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-246818
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 11:48:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 11:46:52 +0000   Mon, 07 Oct 2024 11:32:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 11:46:52 +0000   Mon, 07 Oct 2024 11:32:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 11:46:52 +0000   Mon, 07 Oct 2024 11:32:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 11:46:52 +0000   Mon, 07 Oct 2024 11:32:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    addons-246818
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a7e71aa8d4d4e109baa99d216d2d35a
	  System UUID:                5a7e71aa-8d4d-4e10-9baa-99d216d2d35a
	  Boot ID:                    1e1e4db1-e3af-4cfb-96cf-4a407d094dcb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-69v2g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 coredns-7c65d6cfc9-9n6rn                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     16m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpathplugin-d8rpq                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-addons-246818                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-246818                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-246818                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-l8kql                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-246818                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-84c5f94fbc-q6j6p                               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         16m
	  kube-system                 snapshot-controller-56fcc65765-q9hxr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-56fcc65765-q9tkd                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  local-path-storage          local-path-provisioner-86d989889c-6kwqv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node addons-246818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node addons-246818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node addons-246818 status is now: NodeHasSufficientPID
	  Normal  NodeReady                16m   kubelet          Node addons-246818 status is now: NodeReady
	  Normal  RegisteredNode           16m   node-controller  Node addons-246818 event: Registered Node addons-246818 in Controller
	
	
	==> dmesg <==
	[  +6.987853] systemd-fstab-generator[1189]: Ignoring "noauto" option for root device
	[  +0.080762] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.824342] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.804390] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.058503] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.053847] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.458158] kauditd_printk_skb: 78 callbacks suppressed
	[  +8.783756] kauditd_printk_skb: 22 callbacks suppressed
	[Oct 7 11:33] kauditd_printk_skb: 32 callbacks suppressed
	[ +42.426579] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.667940] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.940260] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 7 11:34] kauditd_printk_skb: 2 callbacks suppressed
	[ +48.225055] kauditd_printk_skb: 15 callbacks suppressed
	[Oct 7 11:35] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.972304] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 7 11:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.308875] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.325093] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.739676] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.143132] kauditd_printk_skb: 20 callbacks suppressed
	[Oct 7 11:45] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.352080] kauditd_printk_skb: 31 callbacks suppressed
	[Oct 7 11:46] kauditd_printk_skb: 1 callbacks suppressed
	[ +17.190014] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] <==
	{"level":"info","ts":"2024-10-07T11:33:57.598168Z","caller":"traceutil/trace.go:171","msg":"trace[1610592843] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1069; }","duration":"332.016264ms","start":"2024-10-07T11:33:57.266146Z","end":"2024-10-07T11:33:57.598162Z","steps":["trace[1610592843] 'agreement among raft nodes before linearized reading'  (duration: 331.93248ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.598655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.659891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-07T11:33:57.598711Z","caller":"traceutil/trace.go:171","msg":"trace[1734221806] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1069; }","duration":"138.717909ms","start":"2024-10-07T11:33:57.459985Z","end":"2024-10-07T11:33:57.598703Z","steps":["trace[1734221806] 'agreement among raft nodes before linearized reading'  (duration: 138.621511ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.598843Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.683257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:33:57.598900Z","caller":"traceutil/trace.go:171","msg":"trace[1418508135] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"147.743392ms","start":"2024-10-07T11:33:57.451149Z","end":"2024-10-07T11:33:57.598892Z","steps":["trace[1418508135] 'agreement among raft nodes before linearized reading'  (duration: 147.663333ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.598872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.22319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:33:57.598979Z","caller":"traceutil/trace.go:171","msg":"trace[1080542174] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"304.328661ms","start":"2024-10-07T11:33:57.294641Z","end":"2024-10-07T11:33:57.598970Z","steps":["trace[1080542174] 'agreement among raft nodes before linearized reading'  (duration: 304.214885ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.599028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:33:57.294615Z","time spent":"304.404536ms","remote":"127.0.0.1:46982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-07T11:34:55.174206Z","caller":"traceutil/trace.go:171","msg":"trace[2115876705] linearizableReadLoop","detail":"{readStateIndex:1224; appliedIndex:1223; }","duration":"118.016178ms","start":"2024-10-07T11:34:55.056148Z","end":"2024-10-07T11:34:55.174164Z","steps":["trace[2115876705] 'read index received'  (duration: 117.833312ms)","trace[2115876705] 'applied index is now lower than readState.Index'  (duration: 181.97µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T11:34:55.174576Z","caller":"traceutil/trace.go:171","msg":"trace[695574193] transaction","detail":"{read_only:false; response_revision:1176; number_of_response:1; }","duration":"175.99018ms","start":"2024-10-07T11:34:54.998568Z","end":"2024-10-07T11:34:55.174558Z","steps":["trace[695574193] 'process raft request'  (duration: 175.463941ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:34:55.174726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.52903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:34:55.175588Z","caller":"traceutil/trace.go:171","msg":"trace[1717354007] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1176; }","duration":"119.452051ms","start":"2024-10-07T11:34:55.056121Z","end":"2024-10-07T11:34:55.175573Z","steps":["trace[1717354007] 'agreement among raft nodes before linearized reading'  (duration: 118.512449ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:42:12.102784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1443}
	{"level":"info","ts":"2024-10-07T11:42:12.139478Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1443,"took":"35.711987ms","hash":2488319999,"current-db-size-bytes":5902336,"current-db-size":"5.9 MB","current-db-size-in-use-bytes":2895872,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-10-07T11:42:12.139591Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2488319999,"revision":1443,"compact-revision":-1}
	{"level":"info","ts":"2024-10-07T11:43:18.110906Z","caller":"traceutil/trace.go:171","msg":"trace[325646834] linearizableReadLoop","detail":"{readStateIndex:2187; appliedIndex:2186; }","duration":"261.537214ms","start":"2024-10-07T11:43:17.849341Z","end":"2024-10-07T11:43:18.110878Z","steps":["trace[325646834] 'read index received'  (duration: 261.404239ms)","trace[325646834] 'applied index is now lower than readState.Index'  (duration: 132.582µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T11:43:18.111051Z","caller":"traceutil/trace.go:171","msg":"trace[977940061] transaction","detail":"{read_only:false; response_revision:2029; number_of_response:1; }","duration":"389.974345ms","start":"2024-10-07T11:43:17.721067Z","end":"2024-10-07T11:43:18.111041Z","steps":["trace[977940061] 'process raft request'  (duration: 389.72661ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:43:18.111247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.449824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"warn","ts":"2024-10-07T11:43:18.111341Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:43:17.721046Z","time spent":"390.024254ms","remote":"127.0.0.1:47046","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-khdavvvmsdaoutnun36u7rbvlu\" mod_revision:1961 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-khdavvvmsdaoutnun36u7rbvlu\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-khdavvvmsdaoutnun36u7rbvlu\" > >"}
	{"level":"info","ts":"2024-10-07T11:43:18.111353Z","caller":"traceutil/trace.go:171","msg":"trace[2088660386] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2029; }","duration":"175.589035ms","start":"2024-10-07T11:43:17.935755Z","end":"2024-10-07T11:43:18.111344Z","steps":["trace[2088660386] 'agreement among raft nodes before linearized reading'  (duration: 175.35089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:43:18.111578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.227097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-10-07T11:43:18.111600Z","caller":"traceutil/trace.go:171","msg":"trace[668771085] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:2029; }","duration":"262.260378ms","start":"2024-10-07T11:43:17.849335Z","end":"2024-10-07T11:43:18.111595Z","steps":["trace[668771085] 'agreement among raft nodes before linearized reading'  (duration: 262.135923ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:47:12.110298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1859}
	{"level":"info","ts":"2024-10-07T11:47:12.131252Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1859,"took":"20.392946ms","hash":1211739089,"current-db-size-bytes":5902336,"current-db-size":"5.9 MB","current-db-size-in-use-bytes":4247552,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-10-07T11:47:12.131383Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1211739089,"revision":1859,"compact-revision":1443}
	
	
	==> kernel <==
	 11:48:56 up 17 min,  0 users,  load average: 0.07, 0.29, 0.35
	Linux addons-246818 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] <==
	 > logger="UnhandledError"
	E1007 11:34:07.194787       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.180.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.180.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.180.136:443: connect: connection refused" logger="UnhandledError"
	W1007 11:34:08.194346       1 handler_proxy.go:99] no RequestInfo found in the context
	W1007 11:34:08.194391       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 11:34:08.194399       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1007 11:34:08.194468       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1007 11:34:08.195529       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 11:34:08.195605       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1007 11:34:12.209010       1 handler_proxy.go:99] no RequestInfo found in the context
	E1007 11:34:12.209502       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1007 11:34:12.210058       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.180.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.180.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.180.136:443: i/o timeout" logger="UnhandledError"
	I1007 11:34:12.229890       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1007 11:43:13.404446       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.123.192"}
	I1007 11:43:31.610761       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 11:43:31.793061       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.111.126"}
	I1007 11:43:35.415346       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1007 11:43:36.447558       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1007 11:45:52.143032       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.225.248"}
	E1007 11:48:56.779471       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] <==
	W1007 11:45:48.645495       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:45:48.645655       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 11:45:51.930930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.003567ms"
	I1007 11:45:51.970786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.784177ms"
	I1007 11:45:51.971319       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="210.354µs"
	I1007 11:45:56.587457       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I1007 11:45:56.592572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.497µs"
	I1007 11:45:56.597237       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I1007 11:46:06.624440       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I1007 11:46:09.171120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-5b584cc74" duration="6.64µs"
	W1007 11:46:46.168384       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:46:46.168483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 11:46:52.281354       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-246818"
	I1007 11:46:57.214329       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="138.753µs"
	I1007 11:47:09.537302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="258.396µs"
	W1007 11:47:36.913772       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:47:36.913991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1007 11:48:07.905958       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:48:07.906063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 11:48:41.466077       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="13.727µs"
	I1007 11:48:43.540394       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="142.853µs"
	W1007 11:48:47.070569       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:48:47.070639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E1007 11:48:50.546791       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	I1007 11:48:54.543212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="467.425µs"
	
	
	==> kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 11:32:23.243441       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 11:32:23.257157       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	E1007 11:32:23.257303       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:32:23.344187       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 11:32:23.344232       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 11:32:23.344291       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:32:23.348157       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:32:23.349642       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:32:23.349675       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:32:23.353061       1 config.go:199] "Starting service config controller"
	I1007 11:32:23.353107       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:32:23.353132       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:32:23.353136       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:32:23.353652       1 config.go:328] "Starting node config controller"
	I1007 11:32:23.353680       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:32:23.453423       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:32:23.453488       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:32:23.453719       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] <==
	W1007 11:32:13.856022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 11:32:13.856054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.719501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 11:32:14.719572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.721026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 11:32:14.721098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.734053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 11:32:14.734189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.747594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 11:32:14.747648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.853414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 11:32:14.853573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.943033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 11:32:14.943144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.979068       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 11:32:14.979173       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 11:32:15.003337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 11:32:15.003472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:15.093807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 11:32:15.093884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:15.121824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 11:32:15.121876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:15.145698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 11:32:15.145757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 11:32:17.639557       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 11:48:16 addons-246818 kubelet[1196]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 11:48:16 addons-246818 kubelet[1196]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 11:48:16 addons-246818 kubelet[1196]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 11:48:16 addons-246818 kubelet[1196]: E1007 11:48:16.945126    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301696944706831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:48:16 addons-246818 kubelet[1196]: E1007 11:48:16.945248    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301696944706831,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:48:26 addons-246818 kubelet[1196]: E1007 11:48:26.947544    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301706947145624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:48:26 addons-246818 kubelet[1196]: E1007 11:48:26.947599    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301706947145624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:48:27 addons-246818 kubelet[1196]: E1007 11:48:27.050333    1196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee95e639-975d-4172-9950-2f0bcdf275d7" containerName="cloud-spanner-emulator"
	Oct 07 11:48:27 addons-246818 kubelet[1196]: E1007 11:48:27.050531    1196 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b940f1da-e470-4328-ad14-6d76d655576f" containerName="controller"
	Oct 07 11:48:27 addons-246818 kubelet[1196]: I1007 11:48:27.050740    1196 memory_manager.go:354] "RemoveStaleState removing state" podUID="b940f1da-e470-4328-ad14-6d76d655576f" containerName="controller"
	Oct 07 11:48:27 addons-246818 kubelet[1196]: I1007 11:48:27.050857    1196 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee95e639-975d-4172-9950-2f0bcdf275d7" containerName="cloud-spanner-emulator"
	Oct 07 11:48:27 addons-246818 kubelet[1196]: I1007 11:48:27.087447    1196 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flzgd\" (UniqueName: \"kubernetes.io/projected/a3d97bdb-415a-408d-82c3-8f66f80c6a2d-kube-api-access-flzgd\") pod \"helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6\" (UID: \"a3d97bdb-415a-408d-82c3-8f66f80c6a2d\") " pod="local-path-storage/helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6"
	Oct 07 11:48:27 addons-246818 kubelet[1196]: I1007 11:48:27.087839    1196 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/a3d97bdb-415a-408d-82c3-8f66f80c6a2d-data\") pod \"helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6\" (UID: \"a3d97bdb-415a-408d-82c3-8f66f80c6a2d\") " pod="local-path-storage/helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6"
	Oct 07 11:48:27 addons-246818 kubelet[1196]: I1007 11:48:27.087990    1196 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/a3d97bdb-415a-408d-82c3-8f66f80c6a2d-script\") pod \"helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6\" (UID: \"a3d97bdb-415a-408d-82c3-8f66f80c6a2d\") " pod="local-path-storage/helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6"
	Oct 07 11:48:28 addons-246818 kubelet[1196]: E1007 11:48:28.859415    1196 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Oct 07 11:48:28 addons-246818 kubelet[1196]: E1007 11:48:28.859538    1196 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/kicbase/echo-server:1.0"
	Oct 07 11:48:28 addons-246818 kubelet[1196]: E1007 11:48:28.860040    1196 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:hello-world-app,Image:docker.io/kicbase/echo-server:1.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-khkjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod hello-world-app-55bf9c44b4-69v2g_default(e73fb85b-64fc-40a4-983f-7278e1c3e3b7): ErrImagePull: reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 07 11:48:28 addons-246818 kubelet[1196]: E1007 11:48:28.863584    1196 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ErrImagePull: \"reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/hello-world-app-55bf9c44b4-69v2g" podUID="e73fb85b-64fc-40a4-983f-7278e1c3e3b7"
	Oct 07 11:48:36 addons-246818 kubelet[1196]: E1007 11:48:36.951491    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301716950777688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:48:36 addons-246818 kubelet[1196]: E1007 11:48:36.951539    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301716950777688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:48:43 addons-246818 kubelet[1196]: E1007 11:48:43.527515    1196 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kicbase/echo-server:1.0\\\"\"" pod="default/hello-world-app-55bf9c44b4-69v2g" podUID="e73fb85b-64fc-40a4-983f-7278e1c3e3b7"
	Oct 07 11:48:46 addons-246818 kubelet[1196]: E1007 11:48:46.954161    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301726953661002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:48:46 addons-246818 kubelet[1196]: E1007 11:48:46.954635    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301726953661002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:48:56 addons-246818 kubelet[1196]: E1007 11:48:56.957136    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301736956666480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:48:56 addons-246818 kubelet[1196]: E1007 11:48:56.957161    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301736956666480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77] <==
	I1007 11:32:29.154950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:32:29.177899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:32:29.177961       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:32:29.210127       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:32:29.210330       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-246818_c20493a4-b4c1-4d82-aa60-bc8f32f150cc!
	I1007 11:32:29.211374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd5fb25e-787a-4fbd-bcb7-131f507b7555", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-246818_c20493a4-b4c1-4d82-aa60-bc8f32f150cc became leader
	I1007 11:32:29.318137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-246818_c20493a4-b4c1-4d82-aa60-bc8f32f150cc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-246818 -n addons-246818
helpers_test.go:261: (dbg) Run:  kubectl --context addons-246818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-246818 describe pod hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-246818 describe pod hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6: exit status 1 (89.470734ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-69v2g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-246818/192.168.39.141
	Start Time:       Mon, 07 Oct 2024 11:45:51 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:           10.244.0.28
	Controlled By:  ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-khkjd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-khkjd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m6s                default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-69v2g to addons-246818
	  Warning  Failed     29s (x2 over 2m1s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     29s (x2 over 2m1s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    14s (x2 over 2m)    kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     14s (x2 over 2m)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    3s (x3 over 3m5s)   kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-246818/192.168.39.141
	Start Time:       Mon, 07 Oct 2024 11:43:36 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fs7ff (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-fs7ff:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m21s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-246818
	  Warning  Failed     4m50s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m32s (x3 over 4m50s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m32s (x2 over 3m48s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    115s (x5 over 4m49s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     115s (x5 over 4m49s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    100s (x4 over 5m20s)   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42qhr (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-42qhr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-246818 describe pod hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6: exit status 1
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (334.06s)
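
The hello-world-app and task-pv-pod failures above are both ErrImagePull/ImagePullBackOff caused by Docker Hub's anonymous pull limit ("toomanyrequests"). A possible mitigation for a rerun, assuming an authenticated Docker Hub account and the illustrative secret name "regcred", is to pre-load the image into the node or attach a pull secret to the default service account:

	out/minikube-linux-amd64 -p addons-246818 image load docker.io/kicbase/echo-server:1.0
	kubectl --context addons-246818 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-246818 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'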

                                                
                                    
TestAddons/parallel/CSI (387.85s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1007 11:43:19.335045  384271 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1007 11:43:19.345647  384271 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1007 11:43:19.345685  384271 kapi.go:107] duration metric: took 10.670151ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 10.68286ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-246818 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-246818 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7dd2a563-8ddd-4a27-b356-1d2368c56e79] Pending
helpers_test.go:344: "task-pv-pod" [7dd2a563-8ddd-4a27-b356-1d2368c56e79] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:506: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:506: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-246818 -n addons-246818
addons_test.go:506: TestAddons/parallel/CSI: showing logs for failed pods as of 2024-10-07 11:49:36.920656872 +0000 UTC m=+1095.539240880
addons_test.go:506: (dbg) Run:  kubectl --context addons-246818 describe po task-pv-pod -n default
addons_test.go:506: (dbg) kubectl --context addons-246818 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-246818/192.168.39.141
Start Time:       Mon, 07 Oct 2024 11:43:36 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.26
IPs:
IP:  10.244.0.26
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fs7ff (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-fs7ff:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/task-pv-pod to addons-246818
Warning  Failed     5m29s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    2m19s (x4 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
Warning  Failed     37s (x4 over 5m29s)    kubelet            Error: ErrImagePull
Warning  Failed     37s (x3 over 4m27s)    kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    9s (x7 over 5m28s)     kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     9s (x7 over 5m28s)     kubelet            Error: ImagePullBackOff
addons_test.go:506: (dbg) Run:  kubectl --context addons-246818 logs task-pv-pod -n default
addons_test.go:506: (dbg) Non-zero exit: kubectl --context addons-246818 logs task-pv-pod -n default: exit status 1 (73.188655ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:506: kubectl --context addons-246818 logs task-pv-pod -n default: exit status 1
addons_test.go:507: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
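The Events above show the pod stuck because anonymous pulls of docker.io/nginx hit Docker Hub's toomanyrequests rate limit, not because of a cluster fault. A hedged sketch of two common workarounds for a run like this (neither is what the harness does here; the secret name and credential placeholders are assumptions, and the pod spec would also need a matching imagePullSecrets entry):

  # Option 1: side-load the image into the node so the kubelet never pulls from docker.io.
  minikube -p addons-246818 image load docker.io/nginx:latest
  # Option 2: pull as an authenticated Docker Hub user to raise the limit.
  kubectl --context addons-246818 -n default create secret docker-registry dockerhub-creds \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> --docker-password=<access-token>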
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-246818 -n addons-246818
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-246818 logs -n 25: (1.330622686s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube             | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-257663              | download-only-257663 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-243020              | download-only-243020 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-257663              | download-only-257663 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| start   | --download-only -p                   | binary-mirror-827339 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | binary-mirror-827339                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:38787               |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-827339              | binary-mirror-827339 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| addons  | enable dashboard -p                  | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | addons-246818                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | addons-246818                        |                      |         |         |                     |                     |
	| start   | -p addons-246818 --wait=true         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:34 UTC |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=kvm2                        |                      |         |         |                     |                     |
	|         | --container-runtime=crio             |                      |         |         |                     |                     |
	|         | --addons=ingress                     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:34 UTC | 07 Oct 24 11:34 UTC |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | gcp-auth --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | -p addons-246818                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | -p addons-246818                     |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | yakd --alsologtostderr -v=1          |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | headlamp --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| ip      | addons-246818 ip                     | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-246818 addons                 | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC | 07 Oct 24 11:43 UTC |
	|         | disable inspektor-gadget             |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| ssh     | addons-246818 ssh curl -s            | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:43 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                      |         |         |                     |                     |
	|         | nginx.example.com'                   |                      |         |         |                     |                     |
	| ip      | addons-246818 ip                     | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:45 UTC |
	|         | ingress-dns --alsologtostderr        |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:45 UTC | 07 Oct 24 11:46 UTC |
	|         | ingress --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-246818 addons                 | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:46 UTC | 07 Oct 24 11:46 UTC |
	|         | disable cloud-spanner                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-246818 addons disable         | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:48 UTC |                     |
	|         | storage-provisioner-rancher          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-246818 addons                 | addons-246818        | jenkins | v1.34.0 | 07 Oct 24 11:48 UTC | 07 Oct 24 11:48 UTC |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
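	# Note: the Audit table above and the "Last Start" section below are both emitted by the
	# post-mortem log collection run at helpers_test.go:247; it can be reproduced by hand with:
	#   out/minikube-linux-amd64 -p addons-246818 logs -n 25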
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:31:34
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:31:34.116156  384891 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:31:34.116270  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:31:34.116277  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:31:34.116282  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:31:34.116469  384891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 11:31:34.117144  384891 out.go:352] Setting JSON to false
	I1007 11:31:34.118102  384891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4440,"bootTime":1728296254,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:31:34.118176  384891 start.go:139] virtualization: kvm guest
	I1007 11:31:34.120408  384891 out.go:177] * [addons-246818] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:31:34.122258  384891 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 11:31:34.122285  384891 notify.go:220] Checking for updates...
	I1007 11:31:34.124959  384891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:31:34.126627  384891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:31:34.128213  384891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:34.129872  384891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:31:34.131237  384891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:31:34.132940  384891 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:31:34.166945  384891 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 11:31:34.168406  384891 start.go:297] selected driver: kvm2
	I1007 11:31:34.168430  384891 start.go:901] validating driver "kvm2" against <nil>
	I1007 11:31:34.168446  384891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:31:34.169281  384891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:31:34.169397  384891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:31:34.186640  384891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:31:34.186710  384891 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:31:34.186981  384891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:31:34.187031  384891 cni.go:84] Creating CNI manager for ""
	I1007 11:31:34.187088  384891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:31:34.187116  384891 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 11:31:34.187194  384891 start.go:340] cluster config:
	{Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:31:34.187319  384891 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:31:34.189414  384891 out.go:177] * Starting "addons-246818" primary control-plane node in "addons-246818" cluster
	I1007 11:31:34.191135  384891 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:31:34.191199  384891 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 11:31:34.191215  384891 cache.go:56] Caching tarball of preloaded images
	I1007 11:31:34.191343  384891 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 11:31:34.191358  384891 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 11:31:34.191753  384891 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/config.json ...
	I1007 11:31:34.191788  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/config.json: {Name:mk8ac1a8a8e3adadfd093d5da0627d5b3cabf0d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
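	# The profile config written above is plain JSON; it can be inspected on the CI host with e.g.
	# (jq is only a convenience, any JSON viewer works):
	#   jq . /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/config.json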
	I1007 11:31:34.191973  384891 start.go:360] acquireMachinesLock for addons-246818: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 11:31:34.192039  384891 start.go:364] duration metric: took 47.555µs to acquireMachinesLock for "addons-246818"
	I1007 11:31:34.192065  384891 start.go:93] Provisioning new machine with config: &{Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:31:34.192185  384891 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 11:31:34.194346  384891 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 11:31:34.194555  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:31:34.194629  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:31:34.210789  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I1007 11:31:34.211351  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:31:34.211942  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:31:34.211966  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:31:34.212395  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:31:34.212604  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:34.212831  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:34.213029  384891 start.go:159] libmachine.API.Create for "addons-246818" (driver="kvm2")
	I1007 11:31:34.213068  384891 client.go:168] LocalClient.Create starting
	I1007 11:31:34.213129  384891 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 11:31:34.455639  384891 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 11:31:34.570226  384891 main.go:141] libmachine: Running pre-create checks...
	I1007 11:31:34.570260  384891 main.go:141] libmachine: (addons-246818) Calling .PreCreateCheck
	I1007 11:31:34.570842  384891 main.go:141] libmachine: (addons-246818) Calling .GetConfigRaw
	I1007 11:31:34.571323  384891 main.go:141] libmachine: Creating machine...
	I1007 11:31:34.571338  384891 main.go:141] libmachine: (addons-246818) Calling .Create
	I1007 11:31:34.571502  384891 main.go:141] libmachine: (addons-246818) Creating KVM machine...
	I1007 11:31:34.572696  384891 main.go:141] libmachine: (addons-246818) DBG | found existing default KVM network
	I1007 11:31:34.573525  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:34.573329  384913 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000115200}
	I1007 11:31:34.573556  384891 main.go:141] libmachine: (addons-246818) DBG | created network xml: 
	I1007 11:31:34.573571  384891 main.go:141] libmachine: (addons-246818) DBG | <network>
	I1007 11:31:34.573580  384891 main.go:141] libmachine: (addons-246818) DBG |   <name>mk-addons-246818</name>
	I1007 11:31:34.573590  384891 main.go:141] libmachine: (addons-246818) DBG |   <dns enable='no'/>
	I1007 11:31:34.573600  384891 main.go:141] libmachine: (addons-246818) DBG |   
	I1007 11:31:34.573610  384891 main.go:141] libmachine: (addons-246818) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 11:31:34.573622  384891 main.go:141] libmachine: (addons-246818) DBG |     <dhcp>
	I1007 11:31:34.573632  384891 main.go:141] libmachine: (addons-246818) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 11:31:34.573640  384891 main.go:141] libmachine: (addons-246818) DBG |     </dhcp>
	I1007 11:31:34.573647  384891 main.go:141] libmachine: (addons-246818) DBG |   </ip>
	I1007 11:31:34.573659  384891 main.go:141] libmachine: (addons-246818) DBG |   
	I1007 11:31:34.573670  384891 main.go:141] libmachine: (addons-246818) DBG | </network>
	I1007 11:31:34.573677  384891 main.go:141] libmachine: (addons-246818) DBG | 
	I1007 11:31:34.579638  384891 main.go:141] libmachine: (addons-246818) DBG | trying to create private KVM network mk-addons-246818 192.168.39.0/24...
	I1007 11:31:34.649044  384891 main.go:141] libmachine: (addons-246818) DBG | private KVM network mk-addons-246818 192.168.39.0/24 created
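	# The network defined from the XML above is a normal libvirt network and can be checked on the
	# host with virsh (the qemu:///system URI matches KVMQemuURI in the cluster config):
	#   virsh --connect qemu:///system net-list --all
	#   virsh --connect qemu:///system net-dumpxml mk-addons-246818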
	I1007 11:31:34.649094  384891 main.go:141] libmachine: (addons-246818) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818 ...
	I1007 11:31:34.649118  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:34.648912  384913 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:34.649140  384891 main.go:141] libmachine: (addons-246818) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 11:31:34.649156  384891 main.go:141] libmachine: (addons-246818) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 11:31:34.924379  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:34.924203  384913 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa...
	I1007 11:31:35.127437  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:35.127261  384913 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/addons-246818.rawdisk...
	I1007 11:31:35.127475  384891 main.go:141] libmachine: (addons-246818) DBG | Writing magic tar header
	I1007 11:31:35.127490  384891 main.go:141] libmachine: (addons-246818) DBG | Writing SSH key tar header
	I1007 11:31:35.127501  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:35.127388  384913 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818 ...
	I1007 11:31:35.127525  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818
	I1007 11:31:35.127537  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 11:31:35.127548  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818 (perms=drwx------)
	I1007 11:31:35.127558  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 11:31:35.127564  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 11:31:35.127603  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:35.127639  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 11:31:35.127648  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 11:31:35.127657  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 11:31:35.127665  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home/jenkins
	I1007 11:31:35.127678  384891 main.go:141] libmachine: (addons-246818) DBG | Checking permissions on dir: /home
	I1007 11:31:35.127691  384891 main.go:141] libmachine: (addons-246818) DBG | Skipping /home - not owner
	I1007 11:31:35.127708  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 11:31:35.127726  384891 main.go:141] libmachine: (addons-246818) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 11:31:35.127740  384891 main.go:141] libmachine: (addons-246818) Creating domain...
	I1007 11:31:35.128819  384891 main.go:141] libmachine: (addons-246818) define libvirt domain using xml: 
	I1007 11:31:35.128847  384891 main.go:141] libmachine: (addons-246818) <domain type='kvm'>
	I1007 11:31:35.128859  384891 main.go:141] libmachine: (addons-246818)   <name>addons-246818</name>
	I1007 11:31:35.128867  384891 main.go:141] libmachine: (addons-246818)   <memory unit='MiB'>4000</memory>
	I1007 11:31:35.128910  384891 main.go:141] libmachine: (addons-246818)   <vcpu>2</vcpu>
	I1007 11:31:35.128933  384891 main.go:141] libmachine: (addons-246818)   <features>
	I1007 11:31:35.128941  384891 main.go:141] libmachine: (addons-246818)     <acpi/>
	I1007 11:31:35.128948  384891 main.go:141] libmachine: (addons-246818)     <apic/>
	I1007 11:31:35.128969  384891 main.go:141] libmachine: (addons-246818)     <pae/>
	I1007 11:31:35.128980  384891 main.go:141] libmachine: (addons-246818)     
	I1007 11:31:35.128988  384891 main.go:141] libmachine: (addons-246818)   </features>
	I1007 11:31:35.128998  384891 main.go:141] libmachine: (addons-246818)   <cpu mode='host-passthrough'>
	I1007 11:31:35.129006  384891 main.go:141] libmachine: (addons-246818)   
	I1007 11:31:35.129016  384891 main.go:141] libmachine: (addons-246818)   </cpu>
	I1007 11:31:35.129046  384891 main.go:141] libmachine: (addons-246818)   <os>
	I1007 11:31:35.129077  384891 main.go:141] libmachine: (addons-246818)     <type>hvm</type>
	I1007 11:31:35.129084  384891 main.go:141] libmachine: (addons-246818)     <boot dev='cdrom'/>
	I1007 11:31:35.129095  384891 main.go:141] libmachine: (addons-246818)     <boot dev='hd'/>
	I1007 11:31:35.129107  384891 main.go:141] libmachine: (addons-246818)     <bootmenu enable='no'/>
	I1007 11:31:35.129117  384891 main.go:141] libmachine: (addons-246818)   </os>
	I1007 11:31:35.129125  384891 main.go:141] libmachine: (addons-246818)   <devices>
	I1007 11:31:35.129140  384891 main.go:141] libmachine: (addons-246818)     <disk type='file' device='cdrom'>
	I1007 11:31:35.129155  384891 main.go:141] libmachine: (addons-246818)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/boot2docker.iso'/>
	I1007 11:31:35.129167  384891 main.go:141] libmachine: (addons-246818)       <target dev='hdc' bus='scsi'/>
	I1007 11:31:35.129174  384891 main.go:141] libmachine: (addons-246818)       <readonly/>
	I1007 11:31:35.129180  384891 main.go:141] libmachine: (addons-246818)     </disk>
	I1007 11:31:35.129194  384891 main.go:141] libmachine: (addons-246818)     <disk type='file' device='disk'>
	I1007 11:31:35.129223  384891 main.go:141] libmachine: (addons-246818)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 11:31:35.129239  384891 main.go:141] libmachine: (addons-246818)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/addons-246818.rawdisk'/>
	I1007 11:31:35.129249  384891 main.go:141] libmachine: (addons-246818)       <target dev='hda' bus='virtio'/>
	I1007 11:31:35.129258  384891 main.go:141] libmachine: (addons-246818)     </disk>
	I1007 11:31:35.129263  384891 main.go:141] libmachine: (addons-246818)     <interface type='network'>
	I1007 11:31:35.129278  384891 main.go:141] libmachine: (addons-246818)       <source network='mk-addons-246818'/>
	I1007 11:31:35.129290  384891 main.go:141] libmachine: (addons-246818)       <model type='virtio'/>
	I1007 11:31:35.129301  384891 main.go:141] libmachine: (addons-246818)     </interface>
	I1007 11:31:35.129312  384891 main.go:141] libmachine: (addons-246818)     <interface type='network'>
	I1007 11:31:35.129322  384891 main.go:141] libmachine: (addons-246818)       <source network='default'/>
	I1007 11:31:35.129335  384891 main.go:141] libmachine: (addons-246818)       <model type='virtio'/>
	I1007 11:31:35.129345  384891 main.go:141] libmachine: (addons-246818)     </interface>
	I1007 11:31:35.129351  384891 main.go:141] libmachine: (addons-246818)     <serial type='pty'>
	I1007 11:31:35.129363  384891 main.go:141] libmachine: (addons-246818)       <target port='0'/>
	I1007 11:31:35.129375  384891 main.go:141] libmachine: (addons-246818)     </serial>
	I1007 11:31:35.129385  384891 main.go:141] libmachine: (addons-246818)     <console type='pty'>
	I1007 11:31:35.129392  384891 main.go:141] libmachine: (addons-246818)       <target type='serial' port='0'/>
	I1007 11:31:35.129398  384891 main.go:141] libmachine: (addons-246818)     </console>
	I1007 11:31:35.129404  384891 main.go:141] libmachine: (addons-246818)     <rng model='virtio'>
	I1007 11:31:35.129410  384891 main.go:141] libmachine: (addons-246818)       <backend model='random'>/dev/random</backend>
	I1007 11:31:35.129416  384891 main.go:141] libmachine: (addons-246818)     </rng>
	I1007 11:31:35.129420  384891 main.go:141] libmachine: (addons-246818)     
	I1007 11:31:35.129426  384891 main.go:141] libmachine: (addons-246818)     
	I1007 11:31:35.129431  384891 main.go:141] libmachine: (addons-246818)   </devices>
	I1007 11:31:35.129437  384891 main.go:141] libmachine: (addons-246818) </domain>
	I1007 11:31:35.129452  384891 main.go:141] libmachine: (addons-246818) 
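	# The guest defined from the domain XML above (2 vCPUs, 4000 MiB, boot2docker ISO plus the raw
	# disk, two virtio NICs) can likewise be confirmed with:
	#   virsh --connect qemu:///system dominfo addons-246818
	#   virsh --connect qemu:///system dumpxml addons-246818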
	I1007 11:31:35.136045  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:59:de:27 in network default
	I1007 11:31:35.136621  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:35.136638  384891 main.go:141] libmachine: (addons-246818) Ensuring networks are active...
	I1007 11:31:35.137397  384891 main.go:141] libmachine: (addons-246818) Ensuring network default is active
	I1007 11:31:35.137759  384891 main.go:141] libmachine: (addons-246818) Ensuring network mk-addons-246818 is active
	I1007 11:31:35.139309  384891 main.go:141] libmachine: (addons-246818) Getting domain xml...
	I1007 11:31:35.140007  384891 main.go:141] libmachine: (addons-246818) Creating domain...
	I1007 11:31:36.562781  384891 main.go:141] libmachine: (addons-246818) Waiting to get IP...
	I1007 11:31:36.563649  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:36.564039  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:36.564102  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:36.564034  384913 retry.go:31] will retry after 196.803567ms: waiting for machine to come up
	I1007 11:31:36.762559  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:36.762980  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:36.763006  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:36.762928  384913 retry.go:31] will retry after 309.609813ms: waiting for machine to come up
	I1007 11:31:37.074568  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:37.075066  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:37.075099  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:37.075019  384913 retry.go:31] will retry after 357.050229ms: waiting for machine to come up
	I1007 11:31:37.433468  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:37.433865  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:37.433888  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:37.433824  384913 retry.go:31] will retry after 404.967007ms: waiting for machine to come up
	I1007 11:31:37.840487  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:37.840912  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:37.840944  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:37.840852  384913 retry.go:31] will retry after 505.430509ms: waiting for machine to come up
	I1007 11:31:38.347450  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:38.347839  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:38.347868  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:38.347768  384913 retry.go:31] will retry after 847.255626ms: waiting for machine to come up
	I1007 11:31:39.196471  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:39.196947  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:39.196980  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:39.196886  384913 retry.go:31] will retry after 920.58458ms: waiting for machine to come up
	I1007 11:31:40.119476  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:40.119814  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:40.119836  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:40.119790  384913 retry.go:31] will retry after 948.651988ms: waiting for machine to come up
	I1007 11:31:41.070215  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:41.070708  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:41.070731  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:41.070668  384913 retry.go:31] will retry after 1.382953489s: waiting for machine to come up
	I1007 11:31:42.455452  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:42.455916  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:42.455941  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:42.455847  384913 retry.go:31] will retry after 2.262578278s: waiting for machine to come up
	I1007 11:31:44.719656  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:44.720338  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:44.720368  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:44.720277  384913 retry.go:31] will retry after 2.289996901s: waiting for machine to come up
	I1007 11:31:47.012350  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:47.012859  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:47.012889  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:47.012809  384913 retry.go:31] will retry after 3.343133276s: waiting for machine to come up
	I1007 11:31:50.358204  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:50.358539  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:50.358566  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:50.358487  384913 retry.go:31] will retry after 4.335427182s: waiting for machine to come up
	I1007 11:31:54.695193  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:54.695591  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find current IP address of domain addons-246818 in network mk-addons-246818
	I1007 11:31:54.695617  384891 main.go:141] libmachine: (addons-246818) DBG | I1007 11:31:54.695544  384913 retry.go:31] will retry after 3.558303483s: waiting for machine to come up
	I1007 11:31:58.258305  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.258838  384891 main.go:141] libmachine: (addons-246818) Found IP for machine: 192.168.39.141
	I1007 11:31:58.258873  384891 main.go:141] libmachine: (addons-246818) Reserving static IP address...
	I1007 11:31:58.258887  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has current primary IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.259281  384891 main.go:141] libmachine: (addons-246818) DBG | unable to find host DHCP lease matching {name: "addons-246818", mac: "52:54:00:b1:d7:db", ip: "192.168.39.141"} in network mk-addons-246818
	I1007 11:31:58.385299  384891 main.go:141] libmachine: (addons-246818) Reserved static IP address: 192.168.39.141
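	# The reservation above pins 192.168.39.141 to MAC 52:54:00:b1:d7:db; the DHCP lease the next
	# lines match against can be listed with:
	#   virsh --connect qemu:///system net-dhcp-leases mk-addons-246818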
	I1007 11:31:58.385331  384891 main.go:141] libmachine: (addons-246818) DBG | Getting to WaitForSSH function...
	I1007 11:31:58.385340  384891 main.go:141] libmachine: (addons-246818) Waiting for SSH to be available...
	I1007 11:31:58.387663  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.388108  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.388140  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.388409  384891 main.go:141] libmachine: (addons-246818) DBG | Using SSH client type: external
	I1007 11:31:58.388428  384891 main.go:141] libmachine: (addons-246818) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa (-rw-------)
	I1007 11:31:58.388460  384891 main.go:141] libmachine: (addons-246818) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 11:31:58.388472  384891 main.go:141] libmachine: (addons-246818) DBG | About to run SSH command:
	I1007 11:31:58.388485  384891 main.go:141] libmachine: (addons-246818) DBG | exit 0
	I1007 11:31:58.523637  384891 main.go:141] libmachine: (addons-246818) DBG | SSH cmd err, output: <nil>: 
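	# The probe above is plain ssh with the profile's generated key; an equivalent manual check
	# (same key and options as logged, trimmed down) is:
	#   ssh -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa \
	#       -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.141 'exit 0'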
	I1007 11:31:58.523957  384891 main.go:141] libmachine: (addons-246818) KVM machine creation complete!
	I1007 11:31:58.524322  384891 main.go:141] libmachine: (addons-246818) Calling .GetConfigRaw
	I1007 11:31:58.524995  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:58.525265  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:58.525453  384891 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 11:31:58.525471  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:31:58.526983  384891 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 11:31:58.527001  384891 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 11:31:58.527007  384891 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 11:31:58.527013  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.529966  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.530364  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.530392  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.530622  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.530830  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.531010  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.531238  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.531430  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.531658  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.531672  384891 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 11:31:58.638640  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:31:58.638671  384891 main.go:141] libmachine: Detecting the provisioner...
	I1007 11:31:58.638699  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.641499  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.641868  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.641902  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.642074  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.642323  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.642499  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.642641  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.642833  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.643029  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.643040  384891 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 11:31:58.752146  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 11:31:58.752213  384891 main.go:141] libmachine: found compatible host: buildroot
	I1007 11:31:58.752223  384891 main.go:141] libmachine: Provisioning with buildroot...
	I1007 11:31:58.752233  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:58.752488  384891 buildroot.go:166] provisioning hostname "addons-246818"
	I1007 11:31:58.752521  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:58.752755  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.755321  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.755658  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.755689  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.755781  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.755930  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.756116  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.756273  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.756441  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.756677  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.756693  384891 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-246818 && echo "addons-246818" | sudo tee /etc/hostname
	I1007 11:31:58.878487  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-246818
	
	I1007 11:31:58.878522  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:58.881235  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.881595  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:58.881628  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:58.881829  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:58.882043  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.882221  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:58.882373  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:58.882547  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:58.882736  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:58.882751  384891 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-246818' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-246818/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-246818' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 11:31:59.000758  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 11:31:59.000793  384891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 11:31:59.000860  384891 buildroot.go:174] setting up certificates
	I1007 11:31:59.000882  384891 provision.go:84] configureAuth start
	I1007 11:31:59.000901  384891 main.go:141] libmachine: (addons-246818) Calling .GetMachineName
	I1007 11:31:59.001290  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:31:59.004173  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.004729  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.004770  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.005018  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.007634  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.007984  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.008012  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.008236  384891 provision.go:143] copyHostCerts
	I1007 11:31:59.008313  384891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 11:31:59.008444  384891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 11:31:59.008531  384891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 11:31:59.008592  384891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.addons-246818 san=[127.0.0.1 192.168.39.141 addons-246818 localhost minikube]
	I1007 11:31:59.251829  384891 provision.go:177] copyRemoteCerts
	I1007 11:31:59.251901  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 11:31:59.251926  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.255073  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.255515  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.255554  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.255695  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.255927  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.256090  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.256229  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.342524  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 11:31:59.367975  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 11:31:59.393410  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 11:31:59.418593  384891 provision.go:87] duration metric: took 417.693053ms to configureAuth
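The scp lines above install the shared CA and the freshly generated server keypair under /etc/docker on the guest (the path libmachine uses regardless of the actual container runtime). As a hedged sketch, one way to confirm that the server certificate carries the SANs requested in the provision log (127.0.0.1, 192.168.39.141, addons-246818, localhost, minikube) is to inspect it from inside the VM; this assumes shell access to the guest and OpenSSL 1.1.1+ for the -ext option:

    # sketch only, run inside the guest (e.g. via `minikube -p addons-246818 ssh`)
    sudo openssl x509 -noout -subject -ext subjectAltName -in /etc/docker/server.pem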
	I1007 11:31:59.418624  384891 buildroot.go:189] setting minikube options for container-runtime
	I1007 11:31:59.418838  384891 config.go:182] Loaded profile config "addons-246818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:31:59.418935  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.421597  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.421932  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.421960  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.422111  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.422335  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.422530  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.422645  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.422799  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:59.423008  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:59.423028  384891 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 11:31:59.655212  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 11:31:59.655259  384891 main.go:141] libmachine: Checking connection to Docker...
	I1007 11:31:59.655271  384891 main.go:141] libmachine: (addons-246818) Calling .GetURL
	I1007 11:31:59.656909  384891 main.go:141] libmachine: (addons-246818) DBG | Using libvirt version 6000000
	I1007 11:31:59.659411  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.659775  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.659810  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.659963  384891 main.go:141] libmachine: Docker is up and running!
	I1007 11:31:59.659972  384891 main.go:141] libmachine: Reticulating splines...
	I1007 11:31:59.659979  384891 client.go:171] duration metric: took 25.446899659s to LocalClient.Create
	I1007 11:31:59.660003  384891 start.go:167] duration metric: took 25.446975437s to libmachine.API.Create "addons-246818"
	I1007 11:31:59.660014  384891 start.go:293] postStartSetup for "addons-246818" (driver="kvm2")
	I1007 11:31:59.660024  384891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 11:31:59.660041  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.660313  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 11:31:59.660341  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.662645  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.663064  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.663113  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.663225  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.663412  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.663549  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.663695  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.746681  384891 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 11:31:59.750995  384891 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 11:31:59.751029  384891 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 11:31:59.751132  384891 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 11:31:59.751171  384891 start.go:296] duration metric: took 91.150102ms for postStartSetup
	I1007 11:31:59.751218  384891 main.go:141] libmachine: (addons-246818) Calling .GetConfigRaw
	I1007 11:31:59.751830  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:31:59.754353  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.754726  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.754752  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.754998  384891 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/config.json ...
	I1007 11:31:59.755218  384891 start.go:128] duration metric: took 25.563019291s to createHost
	I1007 11:31:59.755244  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.757372  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.757682  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.757708  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.757833  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.757994  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.758133  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.758316  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.758481  384891 main.go:141] libmachine: Using SSH client type: native
	I1007 11:31:59.758651  384891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.141 22 <nil> <nil>}
	I1007 11:31:59.758660  384891 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 11:31:59.868422  384891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728300719.835078686
	
	I1007 11:31:59.868449  384891 fix.go:216] guest clock: 1728300719.835078686
	I1007 11:31:59.868459  384891 fix.go:229] Guest: 2024-10-07 11:31:59.835078686 +0000 UTC Remote: 2024-10-07 11:31:59.755232069 +0000 UTC m=+25.679693573 (delta=79.846617ms)
	I1007 11:31:59.868533  384891 fix.go:200] guest clock delta is within tolerance: 79.846617ms
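The delta reported above is simply the difference between the guest's `date +%s.%N` reading and the host-side timestamp taken when the SSH command returned. A rough manual reproduction of the same check (a sketch; assumes the addons-246818 profile is still running and awk is available on the host):

    guest=$(minikube -p addons-246818 ssh "date +%s.%N")   # guest wall clock
    host=$(date +%s.%N)                                     # host wall clock
    awk -v h="$host" -v g="$guest" 'BEGIN{printf "delta: %.3fs\n", h - g}'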
	I1007 11:31:59.868543  384891 start.go:83] releasing machines lock for "addons-246818", held for 25.676492095s
	I1007 11:31:59.868570  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.868898  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:31:59.871581  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.871955  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.871981  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.872222  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.872811  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.872983  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:31:59.873091  384891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 11:31:59.873149  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.873159  384891 ssh_runner.go:195] Run: cat /version.json
	I1007 11:31:59.873181  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:31:59.875672  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.875703  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.876005  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.876042  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:31:59.876063  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.876076  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:31:59.876200  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.876338  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:31:59.876412  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.876507  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:31:59.876572  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.876743  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:31:59.876780  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.876890  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:31:59.978691  384891 ssh_runner.go:195] Run: systemctl --version
	I1007 11:31:59.985018  384891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 11:32:00.152322  384891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 11:32:00.158492  384891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 11:32:00.158593  384891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 11:32:00.176990  384891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 11:32:00.177022  384891 start.go:495] detecting cgroup driver to use...
	I1007 11:32:00.177109  384891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 11:32:00.195687  384891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 11:32:00.211978  384891 docker.go:217] disabling cri-docker service (if available) ...
	I1007 11:32:00.212058  384891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 11:32:00.227604  384891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 11:32:00.242144  384891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 11:32:00.366315  384891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 11:32:00.526683  384891 docker.go:233] disabling docker service ...
	I1007 11:32:00.526776  384891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 11:32:00.541214  384891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 11:32:00.554981  384891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 11:32:00.685283  384891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 11:32:00.806166  384891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 11:32:00.821760  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 11:32:00.840995  384891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 11:32:00.841077  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.852364  384891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 11:32:00.852452  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.863984  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.875862  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.887376  384891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 11:32:00.899170  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.910698  384891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 11:32:00.928710  384891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
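The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and opens unprivileged ports through default_sysctls. A quick way to eyeball the result on the guest (a sketch, not part of the recorded run):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the sed edits above (approximate):
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",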
	I1007 11:32:00.939899  384891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 11:32:00.950399  384891 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 11:32:00.950497  384891 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 11:32:00.964507  384891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
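The sysctl probe above fails only because br_netfilter had not been loaded yet; once the modprobe and the ip_forward write succeed, both settings can be verified directly. A minimal sketch, assuming shell access to the guest:

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # both typically report 1: bridged traffic is visible to iptables and IPv4 forwarding is on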
	I1007 11:32:00.975096  384891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:32:01.103400  384891 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 11:32:01.206446  384891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 11:32:01.206551  384891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 11:32:01.212082  384891 start.go:563] Will wait 60s for crictl version
	I1007 11:32:01.212179  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:32:01.216568  384891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 11:32:01.255513  384891 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 11:32:01.255616  384891 ssh_runner.go:195] Run: crio --version
	I1007 11:32:01.285883  384891 ssh_runner.go:195] Run: crio --version
	I1007 11:32:01.318274  384891 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 11:32:01.319603  384891 main.go:141] libmachine: (addons-246818) Calling .GetIP
	I1007 11:32:01.322312  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:01.322607  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:01.322642  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:01.322882  384891 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 11:32:01.328032  384891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:32:01.342592  384891 kubeadm.go:883] updating cluster {Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 11:32:01.342753  384891 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 11:32:01.342813  384891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:32:01.385519  384891 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 11:32:01.385605  384891 ssh_runner.go:195] Run: which lz4
	I1007 11:32:01.389912  384891 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 11:32:01.394513  384891 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 11:32:01.394572  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 11:32:02.800302  384891 crio.go:462] duration metric: took 1.410419336s to copy over tarball
	I1007 11:32:02.800451  384891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 11:32:04.995474  384891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.194982184s)
	I1007 11:32:04.995507  384891 crio.go:469] duration metric: took 2.195153422s to extract the tarball
	I1007 11:32:04.995518  384891 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 11:32:05.034133  384891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 11:32:05.081714  384891 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 11:32:05.081748  384891 cache_images.go:84] Images are preloaded, skipping loading
	I1007 11:32:05.081759  384891 kubeadm.go:934] updating node { 192.168.39.141 8443 v1.31.1 crio true true} ...
	I1007 11:32:05.081919  384891 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-246818 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
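The kubelet unit shown above, together with the 10-kubeadm.conf drop-in written a few lines below, is what systemd ultimately runs. A hedged way to view the merged result on the node (assumes the profile is up):

    minikube -p addons-246818 ssh "systemctl cat kubelet"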
	I1007 11:32:05.082006  384891 ssh_runner.go:195] Run: crio config
	I1007 11:32:05.126986  384891 cni.go:84] Creating CNI manager for ""
	I1007 11:32:05.127017  384891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:32:05.127029  384891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 11:32:05.127055  384891 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.141 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-246818 NodeName:addons-246818 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 11:32:05.127205  384891 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-246818"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
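The rendered kubeadm configuration above is written to /var/tmp/minikube/kubeadm.yaml.new further down and later handed to `kubeadm init`. To inspect the copy that actually lands on the node, or to sanity-check it without touching the cluster, something like the following works (a sketch; `kubeadm config validate` applies only if the bundled kubeadm supports it):

    minikube -p addons-246818 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml"
    minikube -p addons-246818 ssh \
        "sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml"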
	
	I1007 11:32:05.127271  384891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 11:32:05.138343  384891 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 11:32:05.138419  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 11:32:05.148540  384891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 11:32:05.166067  384891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 11:32:05.184173  384891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1007 11:32:05.202127  384891 ssh_runner.go:195] Run: grep 192.168.39.141	control-plane.minikube.internal$ /etc/hosts
	I1007 11:32:05.206447  384891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 11:32:05.219733  384891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:32:05.356364  384891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:32:05.374398  384891 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818 for IP: 192.168.39.141
	I1007 11:32:05.374431  384891 certs.go:194] generating shared ca certs ...
	I1007 11:32:05.374455  384891 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.374717  384891 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 11:32:05.569743  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt ...
	I1007 11:32:05.569780  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt: {Name:mka635174f873364a1d996695969f11525f0aad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.570000  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key ...
	I1007 11:32:05.570016  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key: {Name:mkb9f08978b906a4a69bf40b3648846639990aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.570120  384891 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 11:32:05.641034  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt ...
	I1007 11:32:05.641069  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt: {Name:mk6c2e0cb0b3463b53d4a7b8eca27330e83cad52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.641265  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key ...
	I1007 11:32:05.641279  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key: {Name:mkbd00d408f92ed97628a06bd31d4a22a55f1116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.641384  384891 certs.go:256] generating profile certs ...
	I1007 11:32:05.641459  384891 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.key
	I1007 11:32:05.641475  384891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt with IP's: []
	I1007 11:32:05.718596  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt ...
	I1007 11:32:05.718631  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: {Name:mk54791d72c1dd37de668acfdf6ae9b6a18b6816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.718824  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.key ...
	I1007 11:32:05.718838  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.key: {Name:mkc39919855b7ef97968b46dce56ec908abc99e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.718952  384891 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102
	I1007 11:32:05.719011  384891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.141]
	I1007 11:32:05.819688  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102 ...
	I1007 11:32:05.819722  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102: {Name:mkfaee04775ee1012712d288fadcabaf991b49f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.819920  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102 ...
	I1007 11:32:05.819938  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102: {Name:mkeee88413f174c6e33cb018157316e66b4b0927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.820040  384891 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt.9a110102 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt
	I1007 11:32:05.820118  384891 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key.9a110102 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key
	I1007 11:32:05.820163  384891 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key
	I1007 11:32:05.820181  384891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt with IP's: []
	I1007 11:32:05.968555  384891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt ...
	I1007 11:32:05.968602  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt: {Name:mk5df33635e69d6716681ea740275cc204f34bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.968800  384891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key ...
	I1007 11:32:05.968815  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key: {Name:mkf7d084582e160837c9ab4efc5b7bae6d92e36f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:05.969012  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 11:32:05.969068  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 11:32:05.969100  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 11:32:05.969125  384891 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 11:32:05.969737  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 11:32:05.995982  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 11:32:06.021458  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 11:32:06.050024  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 11:32:06.079964  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 11:32:06.108572  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 11:32:06.135463  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 11:32:06.162035  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 11:32:06.186675  384891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 11:32:06.216268  384891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 11:32:06.234408  384891 ssh_runner.go:195] Run: openssl version
	I1007 11:32:06.240683  384891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 11:32:06.252555  384891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:32:06.257813  384891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:32:06.257897  384891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 11:32:06.264471  384891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
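The two steps above implement OpenSSL's hashed-directory lookup: the CA is published under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its subject-hash name (b5213941.0 here). A sketch of the same mechanism, showing where the hash-named link comes from:

    # reproduce the hash-named symlink the provisioner creates above
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    ls -l "/etc/ssl/certs/${hash}.0"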
	I1007 11:32:06.276095  384891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 11:32:06.280492  384891 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 11:32:06.280573  384891 kubeadm.go:392] StartCluster: {Name:addons-246818 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-246818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:32:06.280683  384891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 11:32:06.280788  384891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 11:32:06.325293  384891 cri.go:89] found id: ""
	I1007 11:32:06.325397  384891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 11:32:06.338096  384891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 11:32:06.348756  384891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 11:32:06.359237  384891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 11:32:06.359265  384891 kubeadm.go:157] found existing configuration files:
	
	I1007 11:32:06.359321  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 11:32:06.369410  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 11:32:06.369502  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 11:32:06.380168  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 11:32:06.390519  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 11:32:06.390589  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 11:32:06.401125  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 11:32:06.411429  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 11:32:06.411496  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 11:32:06.422449  384891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 11:32:06.432934  384891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 11:32:06.433018  384891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 11:32:06.444113  384891 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 11:32:06.499524  384891 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 11:32:06.499599  384891 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 11:32:06.604372  384891 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 11:32:06.604511  384891 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 11:32:06.604590  384891 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 11:32:06.621867  384891 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 11:32:06.753861  384891 out.go:235]   - Generating certificates and keys ...
	I1007 11:32:06.753997  384891 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 11:32:06.754108  384891 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 11:32:06.754241  384891 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 11:32:06.907525  384891 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 11:32:07.081367  384891 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 11:32:07.235517  384891 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 11:32:07.323576  384891 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 11:32:07.323734  384891 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-246818 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1007 11:32:07.484355  384891 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 11:32:07.484552  384891 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-246818 localhost] and IPs [192.168.39.141 127.0.0.1 ::1]
	I1007 11:32:07.690609  384891 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 11:32:07.921485  384891 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 11:32:08.090512  384891 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 11:32:08.090799  384891 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 11:32:08.402148  384891 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 11:32:08.478195  384891 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 11:32:08.612503  384891 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 11:32:08.702731  384891 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 11:32:09.158663  384891 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 11:32:09.159440  384891 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 11:32:09.161819  384891 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 11:32:09.167042  384891 out.go:235]   - Booting up control plane ...
	I1007 11:32:09.167167  384891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 11:32:09.167249  384891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 11:32:09.167364  384891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 11:32:09.179881  384891 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 11:32:09.189965  384891 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 11:32:09.190035  384891 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 11:32:09.324400  384891 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 11:32:09.324529  384891 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 11:32:09.831332  384891 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 506.899298ms
	I1007 11:32:09.831474  384891 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 11:32:15.831159  384891 kubeadm.go:310] [api-check] The API server is healthy after 6.001731023s
	I1007 11:32:15.856870  384891 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 11:32:15.879662  384891 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 11:32:15.920548  384891 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 11:32:15.920789  384891 kubeadm.go:310] [mark-control-plane] Marking the node addons-246818 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 11:32:15.939440  384891 kubeadm.go:310] [bootstrap-token] Using token: bpaf5t.csjf2xhv6gacp46a
	I1007 11:32:15.940908  384891 out.go:235]   - Configuring RBAC rules ...
	I1007 11:32:15.941047  384891 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 11:32:15.948031  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 11:32:15.960728  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 11:32:15.964750  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 11:32:15.968808  384891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 11:32:15.973958  384891 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 11:32:16.238653  384891 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 11:32:16.679433  384891 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 11:32:17.237909  384891 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 11:32:17.237938  384891 kubeadm.go:310] 
	I1007 11:32:17.238007  384891 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 11:32:17.238014  384891 kubeadm.go:310] 
	I1007 11:32:17.238117  384891 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 11:32:17.238128  384891 kubeadm.go:310] 
	I1007 11:32:17.238155  384891 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 11:32:17.238231  384891 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 11:32:17.238300  384891 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 11:32:17.238310  384891 kubeadm.go:310] 
	I1007 11:32:17.238377  384891 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 11:32:17.238388  384891 kubeadm.go:310] 
	I1007 11:32:17.238446  384891 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 11:32:17.238488  384891 kubeadm.go:310] 
	I1007 11:32:17.238579  384891 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 11:32:17.238753  384891 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 11:32:17.238851  384891 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 11:32:17.238863  384891 kubeadm.go:310] 
	I1007 11:32:17.238995  384891 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 11:32:17.239104  384891 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 11:32:17.239114  384891 kubeadm.go:310] 
	I1007 11:32:17.239246  384891 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bpaf5t.csjf2xhv6gacp46a \
	I1007 11:32:17.239371  384891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 \
	I1007 11:32:17.239410  384891 kubeadm.go:310] 	--control-plane 
	I1007 11:32:17.239423  384891 kubeadm.go:310] 
	I1007 11:32:17.239519  384891 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 11:32:17.239531  384891 kubeadm.go:310] 
	I1007 11:32:17.239632  384891 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bpaf5t.csjf2xhv6gacp46a \
	I1007 11:32:17.239752  384891 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 
	I1007 11:32:17.240386  384891 kubeadm.go:310] W1007 11:32:06.469101     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:32:17.240693  384891 kubeadm.go:310] W1007 11:32:06.469905     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 11:32:17.240786  384891 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
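Note: the two kubeadm warnings above report that the generated configuration still uses the deprecated kubeadm.k8s.io/v1beta3 API. As an illustrative reference only (old.yaml and new.yaml are the placeholder file names quoted in the warning text, not paths used by this job), the migration the warning recommends would be run as:

  # rewrite a v1beta3 kubeadm config to the current API version, per the warning above
  kubeadm config migrate --old-config old.yaml --new-config new.yaml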
	I1007 11:32:17.240815  384891 cni.go:84] Creating CNI manager for ""
	I1007 11:32:17.240824  384891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:32:17.242992  384891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 11:32:17.244570  384891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 11:32:17.255322  384891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
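The 496-byte file scp'd above is minikube's generated bridge CNI config; its exact contents are not shown in this log. As a hedged sketch of how to inspect it on the node (assuming the minikube CLI and the profile name used in this run), one could run:

  # print the bridge CNI conflist that minikube wrote on the node
  minikube ssh -p addons-246818 -- sudo cat /etc/cni/net.d/1-k8s.conflist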
	I1007 11:32:17.274225  384891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 11:32:17.274381  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:17.274395  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-246818 minikube.k8s.io/updated_at=2024_10_07T11_32_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=addons-246818 minikube.k8s.io/primary=true
	I1007 11:32:17.305991  384891 ops.go:34] apiserver oom_adj: -16
	I1007 11:32:17.433612  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:17.933706  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:18.434006  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:18.934513  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:19.434172  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:19.933925  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:20.434498  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:20.934340  384891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 11:32:21.035626  384891 kubeadm.go:1113] duration metric: took 3.76133711s to wait for elevateKubeSystemPrivileges
	I1007 11:32:21.035692  384891 kubeadm.go:394] duration metric: took 14.755128051s to StartCluster
	I1007 11:32:21.035722  384891 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:21.035877  384891 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:32:21.036315  384891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 11:32:21.036557  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 11:32:21.036565  384891 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 11:32:21.036649  384891 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
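The toEnable map above lists every addon minikube considered for this profile and whether it will be enabled. For reference, the same state can be inspected and changed through the minikube CLI; a hedged example using the profile from this run, with addon names taken from the map above:

  # show the enable/disable state of all addons for this profile
  minikube addons list -p addons-246818
  # toggle individual addons, e.g. gcp-auth and volcano from the map above
  minikube addons enable gcp-auth -p addons-246818
  minikube addons disable volcano -p addons-246818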
	I1007 11:32:21.036807  384891 addons.go:69] Setting storage-provisioner=true in profile "addons-246818"
	I1007 11:32:21.036827  384891 addons.go:69] Setting gcp-auth=true in profile "addons-246818"
	I1007 11:32:21.036828  384891 addons.go:69] Setting volcano=true in profile "addons-246818"
	I1007 11:32:21.036807  384891 addons.go:69] Setting inspektor-gadget=true in profile "addons-246818"
	I1007 11:32:21.036852  384891 addons.go:234] Setting addon inspektor-gadget=true in "addons-246818"
	I1007 11:32:21.036853  384891 addons.go:234] Setting addon volcano=true in "addons-246818"
	I1007 11:32:21.036849  384891 addons.go:69] Setting default-storageclass=true in profile "addons-246818"
	I1007 11:32:21.036869  384891 addons.go:69] Setting ingress-dns=true in profile "addons-246818"
	I1007 11:32:21.036879  384891 addons.go:234] Setting addon ingress-dns=true in "addons-246818"
	I1007 11:32:21.036892  384891 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-246818"
	I1007 11:32:21.036910  384891 addons.go:69] Setting metrics-server=true in profile "addons-246818"
	I1007 11:32:21.036924  384891 addons.go:69] Setting registry=true in profile "addons-246818"
	I1007 11:32:21.036927  384891 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-246818"
	I1007 11:32:21.036936  384891 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-246818"
	I1007 11:32:21.036940  384891 addons.go:69] Setting cloud-spanner=true in profile "addons-246818"
	I1007 11:32:21.036952  384891 addons.go:234] Setting addon cloud-spanner=true in "addons-246818"
	I1007 11:32:21.036961  384891 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-246818"
	I1007 11:32:21.036975  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036978  384891 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-246818"
	I1007 11:32:21.036993  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036861  384891 addons.go:69] Setting ingress=true in profile "addons-246818"
	I1007 11:32:21.037030  384891 addons.go:234] Setting addon ingress=true in "addons-246818"
	I1007 11:32:21.037061  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036928  384891 addons.go:234] Setting addon metrics-server=true in "addons-246818"
	I1007 11:32:21.037120  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.037350  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037366  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.036838  384891 addons.go:234] Setting addon storage-provisioner=true in "addons-246818"
	I1007 11:32:21.037391  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037400  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036999  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.037497  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037522  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037549  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037552  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037582  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037557  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037628  384891 addons.go:69] Setting yakd=true in profile "addons-246818"
	I1007 11:32:21.037646  384891 addons.go:234] Setting addon yakd=true in "addons-246818"
	I1007 11:32:21.037680  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036940  384891 addons.go:234] Setting addon registry=true in "addons-246818"
	I1007 11:32:21.037693  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.037718  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.037722  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037828  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.036910  384891 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-246818"
	I1007 11:32:21.037863  384891 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-246818"
	I1007 11:32:21.037867  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.037869  384891 config.go:182] Loaded profile config "addons-246818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:32:21.036900  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.038071  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.038102  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.036900  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.036915  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.038396  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.038456  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.036853  384891 mustload.go:65] Loading cluster: addons-246818
	I1007 11:32:21.037607  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.036926  384891 addons.go:69] Setting volumesnapshots=true in profile "addons-246818"
	I1007 11:32:21.038612  384891 addons.go:234] Setting addon volumesnapshots=true in "addons-246818"
	I1007 11:32:21.038845  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.038991  384891 config.go:182] Loaded profile config "addons-246818": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:32:21.039002  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.039392  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.039450  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.038918  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.039508  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.038917  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.039622  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.038947  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.038892  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.040135  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.043624  384891 out.go:177] * Verifying Kubernetes components...
	I1007 11:32:21.045277  384891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 11:32:21.059674  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40459
	I1007 11:32:21.059886  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34353
	I1007 11:32:21.060116  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45897
	I1007 11:32:21.060236  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34293
	I1007 11:32:21.060237  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.060363  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.060626  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.060914  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.060941  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.061120  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.061149  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.061246  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.061270  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.061308  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.061479  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.061589  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.061687  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.061936  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I1007 11:32:21.062180  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.062193  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.062201  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.062216  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.062230  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.062656  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.062682  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.062857  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.063038  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.079607  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.079643  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.079880  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.079926  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.080116  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.080148  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.080156  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43831
	I1007 11:32:21.080301  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I1007 11:32:21.080981  384891 addons.go:234] Setting addon default-storageclass=true in "addons-246818"
	I1007 11:32:21.081031  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.081396  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.081445  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.081570  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.081657  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.081692  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.082569  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.082591  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.082721  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.082731  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.082825  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.082859  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.083559  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.083625  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.084318  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.084370  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.095528  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I1007 11:32:21.097818  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I1007 11:32:21.098201  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.098902  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.098927  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.099603  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.100289  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.100343  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.100410  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I1007 11:32:21.100514  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42121
	I1007 11:32:21.100846  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.101205  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.101253  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.101833  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.101860  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.101981  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.102007  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.102113  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.102128  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.102370  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.102568  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.102933  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.102979  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.103022  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.103397  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.103433  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.103660  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.103694  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.113877  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33341
	I1007 11:32:21.114643  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.115420  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.115457  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.115864  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.116171  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.120249  384891 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-246818"
	I1007 11:32:21.120318  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.120889  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.120968  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.122908  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42529
	I1007 11:32:21.123632  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.123722  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.123949  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.124128  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43957
	I1007 11:32:21.124615  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.125161  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.125181  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.125325  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.125337  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.125531  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36931
	I1007 11:32:21.125965  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.126199  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.126337  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.126554  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.127633  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.128389  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.128408  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.128475  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.129155  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.129312  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37857
	I1007 11:32:21.129767  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42759
	I1007 11:32:21.130331  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.130464  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.131079  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.131105  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.131107  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.131163  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.131263  384891 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 11:32:21.131344  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 11:32:21.131653  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.131733  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I1007 11:32:21.131896  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.132323  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.132906  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 11:32:21.132924  384891 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 11:32:21.132947  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.133027  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.133041  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.133528  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.133751  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.134899  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:32:21.135060  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.136912  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.137373  384891 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 11:32:21.138188  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.138641  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.138667  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.139051  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.139278  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.139296  384891 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:32:21.139317  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 11:32:21.139349  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.139409  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.139420  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:32:21.139532  384891 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 11:32:21.140022  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.140246  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.141237  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 11:32:21.141257  384891 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 11:32:21.141282  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.141668  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.141695  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.141761  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34671
	I1007 11:32:21.142266  384891 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:32:21.142440  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 11:32:21.142466  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.144235  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1007 11:32:21.145460  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.145517  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.145588  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.146385  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.146417  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.146860  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.146879  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.147046  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.147059  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.147114  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.147158  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.147367  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.147399  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.147622  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.147702  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.147719  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.147904  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.147959  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.148109  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.148421  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.148482  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I1007 11:32:21.148649  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.148707  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35463
	I1007 11:32:21.148836  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.149316  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.149355  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.149633  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.149739  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.149828  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.150158  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.150216  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.150473  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.150757  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.150905  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.150919  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.151003  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:21.151012  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:21.154104  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:21.154210  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.154235  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.154317  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.154383  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.154396  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.154417  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:21.154428  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.154441  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:21.154447  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:21.154455  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:21.154462  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:21.154491  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.154529  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.154555  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43077
	I1007 11:32:21.154584  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.154625  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.154653  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I1007 11:32:21.154704  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:21.154725  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:21.154732  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:21.154758  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	W1007 11:32:21.154823  384891 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1007 11:32:21.155361  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.155377  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.155408  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.155410  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.156096  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.156098  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.156159  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.156308  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.156328  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.156406  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44697
	I1007 11:32:21.156880  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.156968  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.157016  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.157057  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.157424  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:21.157456  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:21.158097  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.158115  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.158531  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.158741  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.159645  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.161490  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.162042  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.162115  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.163859  384891 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 11:32:21.163880  384891 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 11:32:21.163859  384891 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 11:32:21.165361  384891 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 11:32:21.165385  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 11:32:21.165391  384891 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:32:21.165409  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 11:32:21.165411  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.165429  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.166616  384891 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 11:32:21.167980  384891 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 11:32:21.167999  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 11:32:21.168025  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.170468  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.171175  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.171703  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.171726  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.171772  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.172008  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.172069  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.172087  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.172117  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.172343  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.172387  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.172430  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.172550  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.172611  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.172790  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.172809  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.173186  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.173368  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.173431  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.173849  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.174000  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.178470  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I1007 11:32:21.178919  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.179445  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40795
	I1007 11:32:21.179523  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.179546  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.179982  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.180089  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.180539  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.180594  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.180597  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
	I1007 11:32:21.180610  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.180961  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.181131  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.181387  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.181501  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35233
	I1007 11:32:21.181867  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.181944  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.181962  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.182396  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.182521  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.182535  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.182653  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.182767  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.183119  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.183140  384891 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 11:32:21.183154  384891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 11:32:21.183180  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.183341  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.185163  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.186316  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.187476  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.188077  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.188103  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.188214  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 11:32:21.188299  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.188343  384891 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 11:32:21.188505  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.188541  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I1007 11:32:21.188671  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.188708  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34025
	I1007 11:32:21.188930  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.188981  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.189347  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.189515  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.189531  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.189865  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 11:32:21.189883  384891 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 11:32:21.189902  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.189865  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.190077  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.190097  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.190187  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.190696  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.190711  384891 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 11:32:21.190734  384891 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 11:32:21.190756  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.191383  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.194537  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.194635  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.195445  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.195483  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.195505  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.195967  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42707
	I1007 11:32:21.196198  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.196207  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.196231  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.196419  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.196513  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.196561  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.196559  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:21.196717  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.196754  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.196824  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.196885  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.197100  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:21.197145  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.197116  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:21.197531  384891 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 11:32:21.197717  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:21.198163  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:21.198321  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 11:32:21.199810  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:21.199881  384891 out.go:177]   - Using image docker.io/busybox:stable
	I1007 11:32:21.199889  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 11:32:21.201263  384891 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 11:32:21.202581  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 11:32:21.202672  384891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:32:21.202687  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 11:32:21.202707  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.203143  384891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:32:21.203162  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 11:32:21.203188  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.205432  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 11:32:21.206350  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.206434  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.206694  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.206752  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.206778  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.206783  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.207047  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.207116  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.207206  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.207253  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.207304  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.207347  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.207390  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.207667  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:21.208112  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 11:32:21.209535  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	W1007 11:32:21.210345  384891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50694->192.168.39.141:22: read: connection reset by peer
	I1007 11:32:21.210375  384891 retry.go:31] will retry after 169.209619ms: ssh: handshake failed: read tcp 192.168.39.1:50694->192.168.39.141:22: read: connection reset by peer
	I1007 11:32:21.212576  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 11:32:21.213890  384891 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 11:32:21.214984  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 11:32:21.215006  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 11:32:21.215033  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:21.218251  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.218699  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:21.218755  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:21.218955  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:21.219220  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:21.219366  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:21.219512  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	W1007 11:32:21.380838  384891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:50722->192.168.39.141:22: read: connection reset by peer
	I1007 11:32:21.380877  384891 retry.go:31] will retry after 486.807101ms: ssh: handshake failed: read tcp 192.168.39.1:50722->192.168.39.141:22: read: connection reset by peer
	I1007 11:32:21.569888  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 11:32:21.662408  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 11:32:21.671323  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 11:32:21.671359  384891 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 11:32:21.677079  384891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 11:32:21.677113  384891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 11:32:21.717464  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 11:32:21.717508  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 11:32:21.721131  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 11:32:21.721162  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 11:32:21.726314  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 11:32:21.738766  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 11:32:21.751504  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 11:32:21.781874  384891 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 11:32:21.781907  384891 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 11:32:21.814479  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 11:32:21.824071  384891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 11:32:21.824369  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 11:32:21.836461  384891 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 11:32:21.836512  384891 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 11:32:21.850533  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 11:32:21.850563  384891 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 11:32:21.901980  384891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 11:32:21.902023  384891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 11:32:21.930371  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 11:32:21.930410  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 11:32:21.939212  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 11:32:21.939255  384891 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 11:32:21.953019  384891 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:32:21.953053  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 11:32:22.048099  384891 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 11:32:22.048134  384891 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 11:32:22.121023  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 11:32:22.121067  384891 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 11:32:22.190982  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 11:32:22.200335  384891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 11:32:22.200368  384891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 11:32:22.226689  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 11:32:22.226728  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 11:32:22.254471  384891 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 11:32:22.254515  384891 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 11:32:22.284154  384891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:32:22.284192  384891 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 11:32:22.355775  384891 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:32:22.355802  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 11:32:22.460686  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 11:32:22.460719  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 11:32:22.471081  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 11:32:22.471115  384891 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 11:32:22.474890  384891 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 11:32:22.474914  384891 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 11:32:22.505581  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 11:32:22.509236  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 11:32:22.540551  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 11:32:22.706336  384891 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:32:22.706365  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 11:32:22.757067  384891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 11:32:22.757099  384891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 11:32:22.851444  384891 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 11:32:22.851479  384891 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 11:32:22.979312  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:32:23.037624  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 11:32:23.037665  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 11:32:23.181268  384891 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 11:32:23.181304  384891 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 11:32:23.329836  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 11:32:23.329871  384891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 11:32:23.422160  384891 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 11:32:23.422204  384891 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 11:32:23.701377  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 11:32:23.701416  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 11:32:23.717985  384891 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:32:23.718012  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 11:32:23.962990  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 11:32:23.963023  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 11:32:24.062714  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 11:32:24.267101  384891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:32:24.267134  384891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 11:32:24.488660  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 11:32:28.211807  384891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 11:32:28.211865  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:28.215550  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:28.216113  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:28.216153  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:28.216343  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:28.216613  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:28.216834  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:28.217015  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:28.781684  384891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 11:32:29.027350  384891 addons.go:234] Setting addon gcp-auth=true in "addons-246818"
	I1007 11:32:29.027409  384891 host.go:66] Checking if "addons-246818" exists ...
	I1007 11:32:29.027725  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:29.027785  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:29.045375  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45379
	I1007 11:32:29.046015  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:29.046676  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:29.046709  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:29.047110  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:29.047622  384891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:32:29.047675  384891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:32:29.064290  384891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I1007 11:32:29.064871  384891 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:32:29.065411  384891 main.go:141] libmachine: Using API Version  1
	I1007 11:32:29.065438  384891 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:32:29.065798  384891 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:32:29.066019  384891 main.go:141] libmachine: (addons-246818) Calling .GetState
	I1007 11:32:29.068256  384891 main.go:141] libmachine: (addons-246818) Calling .DriverName
	I1007 11:32:29.068576  384891 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 11:32:29.068609  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHHostname
	I1007 11:32:29.071318  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:29.071806  384891 main.go:141] libmachine: (addons-246818) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:d7:db", ip: ""} in network mk-addons-246818: {Iface:virbr1 ExpiryTime:2024-10-07 12:31:49 +0000 UTC Type:0 Mac:52:54:00:b1:d7:db Iaid: IPaddr:192.168.39.141 Prefix:24 Hostname:addons-246818 Clientid:01:52:54:00:b1:d7:db}
	I1007 11:32:29.071836  384891 main.go:141] libmachine: (addons-246818) DBG | domain addons-246818 has defined IP address 192.168.39.141 and MAC address 52:54:00:b1:d7:db in network mk-addons-246818
	I1007 11:32:29.072091  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHPort
	I1007 11:32:29.072359  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHKeyPath
	I1007 11:32:29.072612  384891 main.go:141] libmachine: (addons-246818) Calling .GetSSHUsername
	I1007 11:32:29.072814  384891 sshutil.go:53] new ssh client: &{IP:192.168.39.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/addons-246818/id_rsa Username:docker}
	I1007 11:32:30.065708  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.403252117s)
	I1007 11:32:30.065784  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065796  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065811  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.33946418s)
	I1007 11:32:30.065857  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065865  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.495938324s)
	I1007 11:32:30.065881  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065898  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.327105535s)
	I1007 11:32:30.065926  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065900  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.065941  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065947  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.065956  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.314410411s)
	I1007 11:32:30.066001  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066014  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066107  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.251596479s)
	I1007 11:32:30.066132  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066140  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066201  384891 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.242099217s)
	I1007 11:32:30.066343  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.066347  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.066368  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.066367  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.066377  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066385  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066443  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.066444  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.066450  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.066458  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066464  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066496  384891 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.242103231s)
	I1007 11:32:30.066525  384891 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1007 11:32:30.066633  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.875604078s)
	I1007 11:32:30.066671  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066686  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066701  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.066711  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.066719  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066726  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066812  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.56119506s)
	I1007 11:32:30.066833  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066844  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.066928  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.557663248s)
	I1007 11:32:30.066946  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.066954  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067053  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067070  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.526488091s)
	I1007 11:32:30.067077  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067083  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067087  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067090  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067097  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067099  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067273  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.087920249s)
	W1007 11:32:30.067306  384891 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 11:32:30.067334  384891 retry.go:31] will retry after 318.73232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 11:32:30.067431  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.004678888s)
	I1007 11:32:30.067452  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067472  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067555  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067585  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067595  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067604  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067610  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.067660  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067681  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067687  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067878  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.067912  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.067919  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.067926  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.067932  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.070203  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.070251  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.070258  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.070269  384891 addons.go:475] Verifying addon ingress=true in "addons-246818"
	I1007 11:32:30.070513  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.070568  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.070582  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.071060  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.071101  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.071110  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.071123  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.071132  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.071872  384891 out.go:177] * Verifying ingress addon...
	I1007 11:32:30.072804  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.072826  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072856  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.072870  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.072262  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072292  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.072969  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072327  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072351  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.072993  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073063  384891 node_ready.go:35] waiting up to 6m0s for node "addons-246818" to be "Ready" ...
	I1007 11:32:30.073157  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073172  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072402  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072428  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073301  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.072444  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072472  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073375  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073383  384891 addons.go:475] Verifying addon registry=true in "addons-246818"
	I1007 11:32:30.072519  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072542  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073455  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073743  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.073754  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.072602  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072689  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.072713  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073830  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073838  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.073844  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.072981  384891 addons.go:475] Verifying addon metrics-server=true in "addons-246818"
	I1007 11:32:30.072586  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.073928  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.073935  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.073941  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.074316  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.074355  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.074361  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.074555  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.074692  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.074699  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.074712  384891 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-246818 service yakd-dashboard -n yakd-dashboard
	
	I1007 11:32:30.074754  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.074782  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.074788  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.075159  384891 out.go:177] * Verifying registry addon...
	I1007 11:32:30.077150  384891 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 11:32:30.077593  384891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 11:32:30.087836  384891 node_ready.go:49] node "addons-246818" has status "Ready":"True"
	I1007 11:32:30.087865  384891 node_ready.go:38] duration metric: took 14.756038ms for node "addons-246818" to be "Ready" ...
	I1007 11:32:30.087879  384891 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:32:30.092003  384891 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 11:32:30.092039  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:30.095848  384891 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 11:32:30.095879  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:30.110889  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.110919  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.111265  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.111273  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.111288  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	W1007 11:32:30.111382  384891 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1007 11:32:30.120282  384891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9n6rn" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.121748  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:30.121764  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:30.122055  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:30.122109  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:30.122125  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:30.155261  384891 pod_ready.go:93] pod "coredns-7c65d6cfc9-9n6rn" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.155289  384891 pod_ready.go:82] duration metric: took 34.974077ms for pod "coredns-7c65d6cfc9-9n6rn" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.155302  384891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dzpc8" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.178588  384891 pod_ready.go:93] pod "coredns-7c65d6cfc9-dzpc8" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.178617  384891 pod_ready.go:82] duration metric: took 23.305528ms for pod "coredns-7c65d6cfc9-dzpc8" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.178629  384891 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.223158  384891 pod_ready.go:93] pod "etcd-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.223187  384891 pod_ready.go:82] duration metric: took 44.549581ms for pod "etcd-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.223197  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.253914  384891 pod_ready.go:93] pod "kube-apiserver-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.253941  384891 pod_ready.go:82] duration metric: took 30.73707ms for pod "kube-apiserver-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.253954  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.386868  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 11:32:30.476890  384891 pod_ready.go:93] pod "kube-controller-manager-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.476938  384891 pod_ready.go:82] duration metric: took 222.974328ms for pod "kube-controller-manager-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.476959  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l8kql" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.571544  384891 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-246818" context rescaled to 1 replicas
	I1007 11:32:30.582503  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:30.582873  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:30.914008  384891 pod_ready.go:93] pod "kube-proxy-l8kql" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:30.914040  384891 pod_ready.go:82] duration metric: took 437.071606ms for pod "kube-proxy-l8kql" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:30.914052  384891 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:31.084293  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:31.084904  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:31.277897  384891 pod_ready.go:93] pod "kube-scheduler-addons-246818" in "kube-system" namespace has status "Ready":"True"
	I1007 11:32:31.277934  384891 pod_ready.go:82] duration metric: took 363.871437ms for pod "kube-scheduler-addons-246818" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:31.277953  384891 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace to be "Ready" ...
	I1007 11:32:31.587346  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:31.587502  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:32.188862  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:32.296683  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:32.466486  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.977770361s)
	I1007 11:32:32.466545  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.466560  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.466611  384891 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.39800642s)
	I1007 11:32:32.466755  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.0798406s)
	I1007 11:32:32.466832  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.466844  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.466862  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.466889  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:32.466906  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.466915  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.466922  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.467112  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.467127  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.467136  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:32.467143  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:32.467213  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:32.467225  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.467235  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.467250  384891 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-246818"
	I1007 11:32:32.467411  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:32.467414  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:32.467424  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:32.468956  384891 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 11:32:32.469005  384891 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 11:32:32.470557  384891 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 11:32:32.471269  384891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 11:32:32.472164  384891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 11:32:32.472191  384891 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 11:32:32.502795  384891 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 11:32:32.502824  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:32.554269  384891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 11:32:32.554306  384891 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 11:32:32.588477  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:32.588751  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:32.633642  384891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:32:32.633670  384891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 11:32:32.817741  384891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 11:32:32.975678  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:33.085784  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:33.086499  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:33.284978  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:33.476686  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:33.582171  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:33.582790  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:33.982427  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:34.084906  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:34.085799  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:34.308214  384891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.490411942s)
	I1007 11:32:34.308309  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:34.308332  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:34.308649  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:34.308705  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:34.308723  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:34.308741  384891 main.go:141] libmachine: Making call to close driver server
	I1007 11:32:34.308752  384891 main.go:141] libmachine: (addons-246818) Calling .Close
	I1007 11:32:34.309132  384891 main.go:141] libmachine: (addons-246818) DBG | Closing plugin on server side
	I1007 11:32:34.309186  384891 main.go:141] libmachine: Successfully made call to close driver server
	I1007 11:32:34.309202  384891 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 11:32:34.310559  384891 addons.go:475] Verifying addon gcp-auth=true in "addons-246818"
	I1007 11:32:34.312007  384891 out.go:177] * Verifying gcp-auth addon...
	I1007 11:32:34.314730  384891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 11:32:34.340586  384891 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 11:32:34.340612  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:34.475714  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:34.582546  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:34.583308  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:34.818688  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:34.976405  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:35.082601  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:35.084039  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:35.285036  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:35.318158  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:35.477972  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:35.583376  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:35.583561  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:35.819531  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:35.975590  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:36.082179  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:36.082337  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:36.319330  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:36.476751  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:36.582692  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:36.584000  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:37.005486  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:37.006535  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:37.083365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:37.083910  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:37.287981  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:37.319722  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:37.477822  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:37.581529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:37.582720  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:37.819884  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:37.976935  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:38.082033  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:38.082405  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:38.318841  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:38.475607  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:38.581655  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:38.582226  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:38.819241  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:38.976848  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:39.082867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:39.083274  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:39.290395  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:39.318648  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:39.476451  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:39.582171  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:39.582624  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:39.819410  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:39.977333  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:40.081612  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:40.082203  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:40.319145  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:40.476723  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:40.581603  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:40.583149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:40.818385  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:40.977851  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:41.083017  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:41.083342  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:41.317798  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:41.475982  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:41.582409  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:41.582455  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:41.786127  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:41.819529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:41.976946  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:42.082000  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:42.082192  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:42.318601  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:42.475545  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:42.582736  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:42.583438  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:42.818333  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:42.976980  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:43.083098  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:43.083595  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:43.318576  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:43.503845  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:43.582649  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:43.583155  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:43.818278  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:43.976805  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:44.082470  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:44.082807  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:44.284958  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:44.319223  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:44.476657  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:44.582711  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:44.583066  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:44.818827  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:44.976149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:45.082276  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:45.082484  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:45.318464  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:45.476894  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:45.610547  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:45.610833  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:45.975833  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:45.996872  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:46.082114  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:46.082777  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:46.317822  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:46.476436  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:46.582945  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:46.583120  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:46.784162  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:46.818445  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:46.976526  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:47.082671  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:47.082833  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:47.319655  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:47.476921  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:47.581622  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:47.582699  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:47.818529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:47.977011  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:48.084165  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:48.086044  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:48.319215  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:48.484879  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:48.582304  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:48.582986  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:48.818694  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:48.976728  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:49.081291  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:49.082282  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:49.283787  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:49.318639  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:49.476339  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:49.582576  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:49.582919  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:49.818304  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:49.976650  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:50.081972  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:50.083388  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:50.319189  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:50.476949  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:50.581903  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:50.582534  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:51.138429  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:51.138593  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:51.139224  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:51.139625  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:51.284853  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:51.319510  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:51.478092  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:51.582296  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:51.583977  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:51.821388  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:51.977408  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:52.082306  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:52.082725  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:52.320270  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:52.477071  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:52.581676  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:52.582004  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:52.819335  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:52.976826  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:53.081715  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:53.082217  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:53.286270  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:53.318565  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:53.476657  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:53.582416  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:53.582912  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:53.821038  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:53.976548  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:54.083018  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:54.083157  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:54.318909  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:54.480652  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:54.583081  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:54.583782  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:54.819006  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:54.976399  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:55.081741  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:55.082950  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:55.318290  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:55.477525  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:55.582408  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:55.582694  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:55.784044  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:55.819410  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:55.976273  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:56.081493  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:56.081873  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:56.319113  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:56.476767  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:56.582149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:56.582756  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:56.818865  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:56.977253  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:57.081925  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:57.082420  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:57.318929  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:57.785145  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:57.785322  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:57.785444  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:57.799701  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:32:57.875340  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:57.976458  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:58.082124  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:58.082502  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:58.318902  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:58.476352  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:58.583758  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:58.583953  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:58.817729  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:58.975913  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:59.084032  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:59.086065  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:59.346848  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:59.476648  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:32:59.582942  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:32:59.584115  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:32:59.821365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:32:59.986819  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:00.081462  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:00.083518  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:00.287257  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:00.320992  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:00.476599  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:00.583058  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:00.583512  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:00.818832  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:00.976928  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:01.082142  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:01.082422  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:01.320347  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:01.476916  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:01.581829  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 11:33:01.582058  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:01.824411  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:01.978086  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:02.082410  384891 kapi.go:107] duration metric: took 32.004807404s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 11:33:02.082721  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:02.318823  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:02.476149  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:02.581365  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:02.785380  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:02.819435  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:02.981119  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:03.082298  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:03.318836  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:03.475816  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:03.581866  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:03.820271  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:03.977531  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:04.081370  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:04.318861  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:04.478185  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:04.581057  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:04.786095  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:04.818861  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:04.977359  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:05.081577  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:05.319021  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:05.476415  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:05.582041  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:05.817893  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:05.977602  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:06.081923  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:06.319212  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:06.477018  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:06.582023  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:06.818841  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:06.976129  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:07.082189  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:07.286377  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:07.319883  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:07.476167  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:07.582756  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:07.818624  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:07.977713  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:08.081834  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:08.319188  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:08.477158  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:08.582912  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:08.818256  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:08.976773  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:09.082355  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:09.319241  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:09.476152  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:09.581908  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:09.784186  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:09.817949  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:09.976974  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:10.082168  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:10.318356  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:10.477137  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:10.581246  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:10.819236  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:10.976625  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:11.082510  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:11.319088  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:11.475963  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:11.581311  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:11.785390  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:11.818393  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:11.977640  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:12.081174  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:12.319522  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:12.476944  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:12.582131  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:12.818446  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:12.976621  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:13.081988  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:13.318911  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:13.484798  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:13.582395  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:13.819383  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:13.977648  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:14.082158  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:14.285577  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:14.318713  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:14.475847  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:14.582159  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:14.818441  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:14.977209  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:15.081963  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:15.318737  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:15.476205  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:15.583061  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:15.819153  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:15.976561  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:16.081683  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:16.318410  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:16.476630  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:16.581615  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:16.784072  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:16.818076  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:16.977198  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:17.081611  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:17.320061  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:17.476515  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:17.581786  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:17.818618  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:17.976464  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:18.084173  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:18.318030  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:18.477107  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:18.586160  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:18.784408  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:18.818855  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:18.975975  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:19.083601  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:19.319129  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:19.476165  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:19.581505  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:19.818001  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:19.976718  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:20.082101  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:20.319192  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:20.476616  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:20.581717  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:20.785149  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:20.818020  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:20.976775  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:21.082210  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:21.318711  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:21.475778  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:21.582480  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:21.819356  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:21.977763  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:22.082225  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:22.318697  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:22.476177  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:22.582015  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:22.817984  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:22.976500  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:23.081605  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:23.284652  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:23.319106  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:23.476419  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:23.581621  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:23.818519  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:23.976857  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:24.082273  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:24.319210  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:24.476471  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:24.581691  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:24.818346  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:24.976944  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:25.082182  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:25.285349  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:25.319385  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:25.476777  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:25.582609  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:25.818485  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:25.977168  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:26.082176  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:26.318509  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:26.476390  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:26.581578  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:26.819122  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:26.976649  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:27.081846  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:27.285801  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:27.319965  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:27.476748  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:27.582786  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:27.820119  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:27.977567  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:28.081776  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:28.321486  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:28.476034  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:28.580919  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:28.818302  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:28.976750  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:29.082261  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:29.318773  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:29.476952  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:29.582302  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:29.784755  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:29.818641  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:29.975885  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:30.082754  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:30.318788  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:30.476267  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:30.581482  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:30.818790  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:30.976169  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:31.082040  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:31.318394  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:31.477328  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:31.581590  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:31.785001  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:31.818455  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:31.977285  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:32.082645  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:32.319761  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:32.475996  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:32.580957  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:32.818618  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:32.981189  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:33.082222  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:33.318499  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:33.477371  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:33.581430  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:33.819139  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:33.976629  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:34.348998  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:34.349111  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:34.354582  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:34.477183  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:34.582017  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:34.818854  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:34.975708  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:35.082682  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:35.318096  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:35.476479  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:35.581982  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:35.818348  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:35.976667  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:36.082093  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:36.319301  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:36.477260  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:36.581116  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:36.785438  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:36.818479  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:36.976498  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:37.081603  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:37.318719  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:37.476366  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:37.582055  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:37.818735  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:37.975866  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:38.081879  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:38.318601  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:38.484592  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:38.582279  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:38.818547  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:38.975841  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:39.081986  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:39.284349  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:39.317923  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:39.476365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:39.582175  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:39.818974  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:39.975890  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:40.082033  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:40.318628  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:40.518043  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:40.582189  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:40.819150  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:40.979733  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:41.081822  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:41.284675  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:41.318611  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:41.475350  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:41.581870  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:41.817872  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:41.975624  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:42.082150  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:42.319800  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:42.479033  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:42.583338  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:42.819134  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:42.978046  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:43.083708  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:43.318837  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:43.476705  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:43.582056  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:43.785109  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:43.818104  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:43.976109  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:44.081416  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:44.318991  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:44.476151  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:44.596289  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:44.819051  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:44.976616  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:45.081745  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:45.318842  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:45.476739  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:45.582727  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:45.817867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:45.976600  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:46.082267  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:46.288414  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:46.319714  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:46.476643  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:46.582493  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:46.818948  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:46.977533  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:47.082182  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:47.318238  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:47.476983  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:47.583066  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:47.819252  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:47.978774  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:48.082507  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:48.318486  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:48.476123  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:48.583163  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:48.784677  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:48.822387  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:48.986510  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:49.086137  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:49.323706  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:49.481895  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:49.582564  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:49.819675  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:49.976031  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:50.082594  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:50.319558  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:50.478668  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:50.588098  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:50.788097  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:50.844238  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:50.976971  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:51.083864  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:51.319080  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:51.476545  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:51.581625  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:51.820026  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:51.986619  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:52.092476  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:52.319404  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:52.480622  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:52.588382  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:52.818422  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:52.976771  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:53.082063  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:53.286041  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:53.318561  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:53.476866  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:53.584944  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:53.818557  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:53.976619  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:54.081420  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:54.318813  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:54.475954  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:54.582481  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:54.818913  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:54.976100  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:55.082174  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:55.287305  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:55.318058  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:55.476320  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:55.582149  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:55.826567  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:55.981042  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:56.081276  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:56.319521  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:56.475650  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:56.581596  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:56.818574  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:56.975996  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:57.082643  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:57.626615  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:57.627586  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:57.627720  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:57.631472  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:57.818870  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:57.979364  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:58.081587  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:58.318085  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:58.476312  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:58.581156  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:58.826426  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:58.978242  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:59.081303  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:59.318911  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:59.478537  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:33:59.582057  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:33:59.785115  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:33:59.818776  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:33:59.980469  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 11:34:00.082381  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:00.319529  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:00.477985  384891 kapi.go:107] duration metric: took 1m28.006709237s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 11:34:00.581976  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:00.819378  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:01.082606  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:01.319729  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:01.582377  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:01.785853  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:34:01.819079  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:02.082352  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:02.318806  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:02.583133  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:02.819833  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:03.082070  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:03.319057  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:03.582749  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:03.818867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:04.081986  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:04.285341  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:34:04.318345  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:04.581902  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:04.818896  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:05.082540  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:05.319169  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:05.582754  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:05.818610  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:06.081323  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:06.286945  384891 pod_ready.go:103] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"False"
	I1007 11:34:06.319553  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:06.581733  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:06.819609  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:07.081656  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:07.288453  384891 pod_ready.go:93] pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace has status "Ready":"True"
	I1007 11:34:07.288493  384891 pod_ready.go:82] duration metric: took 1m36.010528889s for pod "metrics-server-84c5f94fbc-q6j6p" in "kube-system" namespace to be "Ready" ...
	I1007 11:34:07.288510  384891 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-8tqmv" in "kube-system" namespace to be "Ready" ...
	I1007 11:34:07.299285  384891 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-8tqmv" in "kube-system" namespace has status "Ready":"True"
	I1007 11:34:07.299313  384891 pod_ready.go:82] duration metric: took 10.79378ms for pod "nvidia-device-plugin-daemonset-8tqmv" in "kube-system" namespace to be "Ready" ...
	I1007 11:34:07.299332  384891 pod_ready.go:39] duration metric: took 1m37.211435839s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 11:34:07.299353  384891 api_server.go:52] waiting for apiserver process to appear ...
	I1007 11:34:07.299401  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:34:07.299455  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:34:07.321320  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:07.350199  384891 cri.go:89] found id: "c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:07.350228  384891 cri.go:89] found id: ""
	I1007 11:34:07.350239  384891 logs.go:282] 1 containers: [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8]
	I1007 11:34:07.350311  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.355340  384891 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:34:07.355425  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:34:07.403255  384891 cri.go:89] found id: "1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:07.403284  384891 cri.go:89] found id: ""
	I1007 11:34:07.403293  384891 logs.go:282] 1 containers: [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4]
	I1007 11:34:07.403356  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.408181  384891 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:34:07.408259  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:34:07.456781  384891 cri.go:89] found id: "0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:07.456810  384891 cri.go:89] found id: ""
	I1007 11:34:07.456821  384891 logs.go:282] 1 containers: [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965]
	I1007 11:34:07.456880  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.461365  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:34:07.461432  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:34:07.503869  384891 cri.go:89] found id: "c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:07.503900  384891 cri.go:89] found id: ""
	I1007 11:34:07.503911  384891 logs.go:282] 1 containers: [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a]
	I1007 11:34:07.503986  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.508824  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:34:07.508912  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:34:07.553417  384891 cri.go:89] found id: "07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:07.553445  384891 cri.go:89] found id: ""
	I1007 11:34:07.553453  384891 logs.go:282] 1 containers: [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e]
	I1007 11:34:07.553507  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.558607  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:34:07.558691  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:34:07.582482  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:07.609104  384891 cri.go:89] found id: "8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:07.609133  384891 cri.go:89] found id: ""
	I1007 11:34:07.609143  384891 logs.go:282] 1 containers: [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae]
	I1007 11:34:07.609209  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:07.614014  384891 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:34:07.614095  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:34:07.669307  384891 cri.go:89] found id: ""
	I1007 11:34:07.669339  384891 logs.go:282] 0 containers: []
	W1007 11:34:07.669348  384891 logs.go:284] No container was found matching "kindnet"
	I1007 11:34:07.669360  384891 logs.go:123] Gathering logs for dmesg ...
	I1007 11:34:07.669374  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:34:07.692510  384891 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:34:07.692553  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:34:07.820538  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:07.833306  384891 logs.go:123] Gathering logs for kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] ...
	I1007 11:34:07.833344  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:07.881834  384891 logs.go:123] Gathering logs for kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] ...
	I1007 11:34:07.881872  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:07.922102  384891 logs.go:123] Gathering logs for kubelet ...
	I1007 11:34:07.922135  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 11:34:07.994930  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:07.995159  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:08.014966  384891 logs.go:123] Gathering logs for coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] ...
	I1007 11:34:08.015007  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:08.059810  384891 logs.go:123] Gathering logs for kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] ...
	I1007 11:34:08.059846  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:08.082446  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:08.118806  384891 logs.go:123] Gathering logs for kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] ...
	I1007 11:34:08.118857  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:08.183364  384891 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:34:08.183410  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:34:08.319460  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:08.583736  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:08.819563  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:08.851907  384891 logs.go:123] Gathering logs for container status ...
	I1007 11:34:08.851975  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:34:08.905544  384891 logs.go:123] Gathering logs for etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] ...
	I1007 11:34:08.905576  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:08.973774  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:08.973822  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 11:34:08.973898  384891 out.go:270] X Problems detected in kubelet:
	W1007 11:34:08.973917  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:08.973935  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:08.973949  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:08.973962  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:34:09.082037  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:09.319301  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:09.582172  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:09.818720  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:10.083461  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:10.318771  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:10.582330  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:10.819089  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:11.081911  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:11.321748  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:11.581492  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:11.818375  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:12.082063  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:12.319965  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:12.582369  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:12.819383  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:13.082206  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:13.318240  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:13.583364  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:13.818316  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:14.081551  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:14.318945  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:14.581789  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:14.819411  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:15.081875  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:15.318853  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:15.582528  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:15.818834  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:16.081977  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:16.318787  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:16.582509  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:16.818784  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:17.082467  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:17.319180  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:17.583829  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:17.819020  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:18.083259  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:18.318588  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:18.585693  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:18.818464  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:18.975488  384891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 11:34:18.998847  384891 api_server.go:72] duration metric: took 1m57.962235499s to wait for apiserver process to appear ...
	I1007 11:34:18.998888  384891 api_server.go:88] waiting for apiserver healthz status ...
	I1007 11:34:18.998936  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:34:18.999018  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:34:19.040445  384891 cri.go:89] found id: "c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:19.040469  384891 cri.go:89] found id: ""
	I1007 11:34:19.040485  384891 logs.go:282] 1 containers: [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8]
	I1007 11:34:19.040551  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.046554  384891 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:34:19.046621  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:34:19.082671  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:19.092133  384891 cri.go:89] found id: "1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:19.092166  384891 cri.go:89] found id: ""
	I1007 11:34:19.092176  384891 logs.go:282] 1 containers: [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4]
	I1007 11:34:19.092241  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.096808  384891 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:34:19.096908  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:34:19.138989  384891 cri.go:89] found id: "0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:19.139023  384891 cri.go:89] found id: ""
	I1007 11:34:19.139035  384891 logs.go:282] 1 containers: [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965]
	I1007 11:34:19.139100  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.143619  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:34:19.143693  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:34:19.191484  384891 cri.go:89] found id: "c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:19.191512  384891 cri.go:89] found id: ""
	I1007 11:34:19.191523  384891 logs.go:282] 1 containers: [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a]
	I1007 11:34:19.191676  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.196448  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:34:19.196521  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:34:19.242455  384891 cri.go:89] found id: "07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:19.242492  384891 cri.go:89] found id: ""
	I1007 11:34:19.242503  384891 logs.go:282] 1 containers: [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e]
	I1007 11:34:19.242564  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.248534  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:34:19.248629  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:34:19.291085  384891 cri.go:89] found id: "8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:19.291114  384891 cri.go:89] found id: ""
	I1007 11:34:19.291124  384891 logs.go:282] 1 containers: [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae]
	I1007 11:34:19.291194  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:19.295722  384891 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:34:19.295810  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:34:19.318088  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:19.340630  384891 cri.go:89] found id: ""
	I1007 11:34:19.340658  384891 logs.go:282] 0 containers: []
	W1007 11:34:19.340668  384891 logs.go:284] No container was found matching "kindnet"
	I1007 11:34:19.340678  384891 logs.go:123] Gathering logs for kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] ...
	I1007 11:34:19.340701  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:19.398366  384891 logs.go:123] Gathering logs for kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] ...
	I1007 11:34:19.398413  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:19.441039  384891 logs.go:123] Gathering logs for kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] ...
	I1007 11:34:19.441071  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:19.515511  384891 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:34:19.515559  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:34:19.581392  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:19.820008  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:20.082996  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:20.318698  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:20.371437  384891 logs.go:123] Gathering logs for kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] ...
	I1007 11:34:20.371566  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:20.421572  384891 logs.go:123] Gathering logs for container status ...
	I1007 11:34:20.421622  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:34:20.473855  384891 logs.go:123] Gathering logs for kubelet ...
	I1007 11:34:20.473898  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 11:34:20.539155  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:20.539346  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:20.560434  384891 logs.go:123] Gathering logs for dmesg ...
	I1007 11:34:20.560477  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:34:20.578609  384891 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:34:20.578644  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:34:20.582162  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:20.705740  384891 logs.go:123] Gathering logs for etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] ...
	I1007 11:34:20.705772  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:20.771436  384891 logs.go:123] Gathering logs for coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] ...
	I1007 11:34:20.771482  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:20.817335  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:20.817370  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 11:34:20.817442  384891 out.go:270] X Problems detected in kubelet:
	W1007 11:34:20.817457  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:20.817470  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:20.817479  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:20.817488  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:34:20.818512  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:21.082056  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:21.318867  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:21.582262  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:21.818795  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:22.083232  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:22.318990  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:22.582413  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:22.819076  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:23.082537  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:23.318303  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:23.583644  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:23.818519  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:24.081687  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:24.318430  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:24.582120  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:24.819111  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:25.086365  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:25.320747  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:25.582278  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:25.819707  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:26.082436  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:26.319403  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:26.582434  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:26.819099  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:27.082857  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:27.318289  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:27.581568  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:27.819777  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:28.081999  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:28.318751  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:28.582679  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:28.818757  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:29.082323  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:29.318830  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:29.582031  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:29.818723  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:30.082134  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:30.319885  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:30.581940  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:30.818806  384891 api_server.go:253] Checking apiserver healthz at https://192.168.39.141:8443/healthz ...
	I1007 11:34:30.824530  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:30.825860  384891 api_server.go:279] https://192.168.39.141:8443/healthz returned 200:
	ok
	I1007 11:34:30.826750  384891 api_server.go:141] control plane version: v1.31.1
	I1007 11:34:30.826782  384891 api_server.go:131] duration metric: took 11.827885179s to wait for apiserver health ...
	I1007 11:34:30.826793  384891 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 11:34:30.826818  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 11:34:30.826869  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 11:34:30.868009  384891 cri.go:89] found id: "c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:30.868043  384891 cri.go:89] found id: ""
	I1007 11:34:30.868054  384891 logs.go:282] 1 containers: [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8]
	I1007 11:34:30.868116  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:30.872897  384891 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 11:34:30.872982  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 11:34:30.921766  384891 cri.go:89] found id: "1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:30.921797  384891 cri.go:89] found id: ""
	I1007 11:34:30.921807  384891 logs.go:282] 1 containers: [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4]
	I1007 11:34:30.921872  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:30.926658  384891 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 11:34:30.926751  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 11:34:30.967084  384891 cri.go:89] found id: "0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:30.967110  384891 cri.go:89] found id: ""
	I1007 11:34:30.967121  384891 logs.go:282] 1 containers: [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965]
	I1007 11:34:30.967184  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:30.971720  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 11:34:30.971806  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 11:34:31.014014  384891 cri.go:89] found id: "c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:31.014051  384891 cri.go:89] found id: ""
	I1007 11:34:31.014063  384891 logs.go:282] 1 containers: [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a]
	I1007 11:34:31.014128  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:31.019324  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 11:34:31.019476  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 11:34:31.061685  384891 cri.go:89] found id: "07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:31.061719  384891 cri.go:89] found id: ""
	I1007 11:34:31.061730  384891 logs.go:282] 1 containers: [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e]
	I1007 11:34:31.061791  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:31.066589  384891 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 11:34:31.066673  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 11:34:31.081745  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:31.112923  384891 cri.go:89] found id: "8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:31.112948  384891 cri.go:89] found id: ""
	I1007 11:34:31.112957  384891 logs.go:282] 1 containers: [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae]
	I1007 11:34:31.113010  384891 ssh_runner.go:195] Run: which crictl
	I1007 11:34:31.118016  384891 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 11:34:31.118089  384891 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 11:34:31.171358  384891 cri.go:89] found id: ""
	I1007 11:34:31.171390  384891 logs.go:282] 0 containers: []
	W1007 11:34:31.171402  384891 logs.go:284] No container was found matching "kindnet"
	I1007 11:34:31.171415  384891 logs.go:123] Gathering logs for describe nodes ...
	I1007 11:34:31.171439  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 11:34:31.307909  384891 logs.go:123] Gathering logs for kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] ...
	I1007 11:34:31.307947  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8"
	I1007 11:34:31.318066  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:31.370102  384891 logs.go:123] Gathering logs for coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] ...
	I1007 11:34:31.370145  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965"
	I1007 11:34:31.412898  384891 logs.go:123] Gathering logs for kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] ...
	I1007 11:34:31.412929  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e"
	I1007 11:34:31.455361  384891 logs.go:123] Gathering logs for kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] ...
	I1007 11:34:31.455399  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae"
	I1007 11:34:31.525681  384891 logs.go:123] Gathering logs for container status ...
	I1007 11:34:31.525726  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 11:34:31.581299  384891 logs.go:123] Gathering logs for kubelet ...
	I1007 11:34:31.581352  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 11:34:31.582018  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1007 11:34:31.650024  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:31.650226  384891 logs.go:138] Found kubelet problem: Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:31.671782  384891 logs.go:123] Gathering logs for dmesg ...
	I1007 11:34:31.671817  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 11:34:31.692198  384891 logs.go:123] Gathering logs for etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] ...
	I1007 11:34:31.692235  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4"
	I1007 11:34:31.760832  384891 logs.go:123] Gathering logs for kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] ...
	I1007 11:34:31.760880  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a"
	I1007 11:34:31.809091  384891 logs.go:123] Gathering logs for CRI-O ...
	I1007 11:34:31.809129  384891 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1007 11:34:31.818667  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:32.083426  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:32.318110  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:32.582254  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:32.686330  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:32.686374  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 11:34:32.686450  384891 out.go:270] X Problems detected in kubelet:
	W1007 11:34:32.686461  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: W1007 11:32:34.234872    1196 reflector.go:561] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-246818" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-246818' and this object
	W1007 11:34:32.686473  384891 out.go:270]   Oct 07 11:32:34 addons-246818 kubelet[1196]: E1007 11:32:34.235003    1196 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"gcp-auth-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth-certs\" is forbidden: User \"system:node:addons-246818\" cannot list resource \"secrets\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-246818' and this object" logger="UnhandledError"
	I1007 11:34:32.686481  384891 out.go:358] Setting ErrFile to fd 2...
	I1007 11:34:32.686488  384891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:34:32.820112  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:33.082098  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:33.319357  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:33.583417  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:33.819012  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:34.082102  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:34.318854  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:34.582183  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:34.819365  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:35.082034  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:35.318900  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:35.582595  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:35.819015  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:36.081981  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:36.319063  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:36.582084  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:36.818989  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:37.082637  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:37.318307  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:37.582037  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:37.819608  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:38.082058  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:38.319071  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:38.582896  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:38.818216  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:39.082926  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:39.318258  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:39.582671  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:39.819037  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:40.082183  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:40.319106  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:40.582450  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:40.818611  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:41.082311  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:41.319060  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:41.582150  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:41.819047  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:42.081964  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:42.318809  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:42.582264  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:42.694665  384891 system_pods.go:59] 17 kube-system pods found
	I1007 11:34:42.694702  384891 system_pods.go:61] "coredns-7c65d6cfc9-9n6rn" [a65cd5da-6560-4c5a-9311-ca855450e9a9] Running
	I1007 11:34:42.694707  384891 system_pods.go:61] "csi-hostpath-attacher-0" [91820122-4ed3-4251-b1fd-f63756f7e814] Running
	I1007 11:34:42.694711  384891 system_pods.go:61] "csi-hostpath-resizer-0" [2a120d65-04bc-42e4-b324-49d7300d4ed8] Running
	I1007 11:34:42.694716  384891 system_pods.go:61] "csi-hostpathplugin-d8rpq" [52c9f352-e70d-47a1-907f-b13d53f6bc60] Running
	I1007 11:34:42.694719  384891 system_pods.go:61] "etcd-addons-246818" [bb627733-dff2-491c-8308-3ac74e5903dc] Running
	I1007 11:34:42.694723  384891 system_pods.go:61] "kube-apiserver-addons-246818" [e9c4665f-2478-4c1f-9cbf-0619491257dd] Running
	I1007 11:34:42.694726  384891 system_pods.go:61] "kube-controller-manager-addons-246818" [5c61899b-9f40-4b5d-b0ab-a796a3c1c8ba] Running
	I1007 11:34:42.694730  384891 system_pods.go:61] "kube-ingress-dns-minikube" [830d0746-7b01-4a11-a0ee-8f9298e96c17] Running
	I1007 11:34:42.694733  384891 system_pods.go:61] "kube-proxy-l8kql" [847b99db-d42a-483a-87e5-f70b492c2430] Running
	I1007 11:34:42.694738  384891 system_pods.go:61] "kube-scheduler-addons-246818" [1fbb2a15-cc03-4580-94f0-5afee1897222] Running
	I1007 11:34:42.694741  384891 system_pods.go:61] "metrics-server-84c5f94fbc-q6j6p" [f37e3b43-4ce4-4879-babb-e6efdf0f3163] Running
	I1007 11:34:42.694746  384891 system_pods.go:61] "nvidia-device-plugin-daemonset-8tqmv" [69715854-4ded-41a3-83c7-1c8c927935d3] Running
	I1007 11:34:42.694749  384891 system_pods.go:61] "registry-66c9cd494c-pdbhh" [0abb32c0-d3dc-447d-a3b9-d672a6f088ff] Running
	I1007 11:34:42.694752  384891 system_pods.go:61] "registry-proxy-nczxq" [f47e8fd0-0149-4ade-8c43-90e4eeb9b7cf] Running
	I1007 11:34:42.694756  384891 system_pods.go:61] "snapshot-controller-56fcc65765-q9hxr" [189d7791-dda8-49aa-b59d-36fdbc31d559] Running
	I1007 11:34:42.694759  384891 system_pods.go:61] "snapshot-controller-56fcc65765-q9tkd" [1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91] Running
	I1007 11:34:42.694763  384891 system_pods.go:61] "storage-provisioner" [2f27f3bc-8533-41d5-b82e-373f84b67952] Running
	I1007 11:34:42.694769  384891 system_pods.go:74] duration metric: took 11.867969785s to wait for pod list to return data ...
	I1007 11:34:42.694780  384891 default_sa.go:34] waiting for default service account to be created ...
	I1007 11:34:42.697608  384891 default_sa.go:45] found service account: "default"
	I1007 11:34:42.697642  384891 default_sa.go:55] duration metric: took 2.852196ms for default service account to be created ...
	I1007 11:34:42.697656  384891 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 11:34:42.706719  384891 system_pods.go:86] 17 kube-system pods found
	I1007 11:34:42.706756  384891 system_pods.go:89] "coredns-7c65d6cfc9-9n6rn" [a65cd5da-6560-4c5a-9311-ca855450e9a9] Running
	I1007 11:34:42.706762  384891 system_pods.go:89] "csi-hostpath-attacher-0" [91820122-4ed3-4251-b1fd-f63756f7e814] Running
	I1007 11:34:42.706766  384891 system_pods.go:89] "csi-hostpath-resizer-0" [2a120d65-04bc-42e4-b324-49d7300d4ed8] Running
	I1007 11:34:42.706770  384891 system_pods.go:89] "csi-hostpathplugin-d8rpq" [52c9f352-e70d-47a1-907f-b13d53f6bc60] Running
	I1007 11:34:42.706774  384891 system_pods.go:89] "etcd-addons-246818" [bb627733-dff2-491c-8308-3ac74e5903dc] Running
	I1007 11:34:42.706778  384891 system_pods.go:89] "kube-apiserver-addons-246818" [e9c4665f-2478-4c1f-9cbf-0619491257dd] Running
	I1007 11:34:42.706782  384891 system_pods.go:89] "kube-controller-manager-addons-246818" [5c61899b-9f40-4b5d-b0ab-a796a3c1c8ba] Running
	I1007 11:34:42.706788  384891 system_pods.go:89] "kube-ingress-dns-minikube" [830d0746-7b01-4a11-a0ee-8f9298e96c17] Running
	I1007 11:34:42.706791  384891 system_pods.go:89] "kube-proxy-l8kql" [847b99db-d42a-483a-87e5-f70b492c2430] Running
	I1007 11:34:42.706795  384891 system_pods.go:89] "kube-scheduler-addons-246818" [1fbb2a15-cc03-4580-94f0-5afee1897222] Running
	I1007 11:34:42.706800  384891 system_pods.go:89] "metrics-server-84c5f94fbc-q6j6p" [f37e3b43-4ce4-4879-babb-e6efdf0f3163] Running
	I1007 11:34:42.706805  384891 system_pods.go:89] "nvidia-device-plugin-daemonset-8tqmv" [69715854-4ded-41a3-83c7-1c8c927935d3] Running
	I1007 11:34:42.706808  384891 system_pods.go:89] "registry-66c9cd494c-pdbhh" [0abb32c0-d3dc-447d-a3b9-d672a6f088ff] Running
	I1007 11:34:42.706812  384891 system_pods.go:89] "registry-proxy-nczxq" [f47e8fd0-0149-4ade-8c43-90e4eeb9b7cf] Running
	I1007 11:34:42.706815  384891 system_pods.go:89] "snapshot-controller-56fcc65765-q9hxr" [189d7791-dda8-49aa-b59d-36fdbc31d559] Running
	I1007 11:34:42.706819  384891 system_pods.go:89] "snapshot-controller-56fcc65765-q9tkd" [1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91] Running
	I1007 11:34:42.706823  384891 system_pods.go:89] "storage-provisioner" [2f27f3bc-8533-41d5-b82e-373f84b67952] Running
	I1007 11:34:42.706835  384891 system_pods.go:126] duration metric: took 9.170306ms to wait for k8s-apps to be running ...
	I1007 11:34:42.706847  384891 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 11:34:42.706901  384891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 11:34:42.725146  384891 system_svc.go:56] duration metric: took 18.286276ms WaitForService to wait for kubelet
	I1007 11:34:42.725182  384891 kubeadm.go:582] duration metric: took 2m21.688585174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 11:34:42.725203  384891 node_conditions.go:102] verifying NodePressure condition ...
	I1007 11:34:42.728139  384891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 11:34:42.728194  384891 node_conditions.go:123] node cpu capacity is 2
	I1007 11:34:42.728211  384891 node_conditions.go:105] duration metric: took 3.001618ms to run NodePressure ...
	I1007 11:34:42.728226  384891 start.go:241] waiting for startup goroutines ...
	I1007 11:34:42.819517  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:43.082232  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:43.319050  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:43.582210  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:43.819348  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:44.081779  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:44.318592  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:44.581627  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:44.818069  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:45.082710  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:45.319371  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:45.581377  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:45.818428  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:46.083012  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:46.320632  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:46.581260  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:46.819209  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:47.082692  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:47.318983  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:47.582357  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:47.823398  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:48.082344  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:48.318267  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:48.581439  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:48.820231  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:49.082123  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:49.318989  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:49.582868  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:49.820088  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:50.084119  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:50.318944  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:50.581942  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:50.818634  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:51.082987  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:51.319771  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:51.582116  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:51.819251  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:52.082449  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:52.318176  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:52.582176  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:52.819387  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:53.081651  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:53.319024  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:53.582594  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:53.819107  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:54.082146  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:54.318787  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:54.582627  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:54.818201  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:55.204294  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:55.319426  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:55.583686  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:55.819569  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:56.082731  384891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 11:34:56.318631  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:56.581113  384891 kapi.go:107] duration metric: took 2m26.503967901s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 11:34:56.819419  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:57.319107  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:57.818908  384891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 11:34:58.322546  384891 kapi.go:107] duration metric: took 2m24.007812557s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 11:34:58.323908  384891 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-246818 cluster.
	I1007 11:34:58.325270  384891 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 11:34:58.326576  384891 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 11:34:58.328149  384891 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, nvidia-device-plugin, metrics-server, inspektor-gadget, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1007 11:34:58.329558  384891 addons.go:510] duration metric: took 2m37.292909623s for enable addons: enabled=[ingress-dns storage-provisioner nvidia-device-plugin metrics-server inspektor-gadget cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1007 11:34:58.329605  384891 start.go:246] waiting for cluster config update ...
	I1007 11:34:58.329625  384891 start.go:255] writing updated cluster config ...
	I1007 11:34:58.329888  384891 ssh_runner.go:195] Run: rm -f paused
	I1007 11:34:58.382842  384891 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 11:34:58.384942  384891 out.go:177] * Done! kubectl is now configured to use "addons-246818" cluster and "default" namespace by default
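	
	For reference, the log-gathering passes interleaved above reduce to a handful of shell commands run over SSH on the node. A minimal sketch of the same gathering pass follows (assuming crictl, journalctl and the kubeconfig path shown above are present on the node; <container-id> is a placeholder for one of the IDs returned by crictl ps):
	
	  # list containers for one control-plane component, then dump the tail of its logs
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo /usr/bin/crictl logs --tail 400 <container-id>
	  # runtime, kubelet and kernel logs
	  sudo journalctl -u crio -n 400
	  sudo journalctl -u kubelet -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  # cluster-level view of node and pod state
	  sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig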
	
	
	==> CRI-O <==
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.836790560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301777836758744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eac086d1-308a-4779-8983-f75bd5a0d01f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.837617157Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7bb587d7-6fe5-4798-a95f-323ab5f31117 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.837795214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7bb587d7-6fe5-4798-a95f-323ab5f31117 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.838210245Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebc0ebd2dc5ea489727702fe5287176a8dfee72d4a838bf924a405e1bc8d5263,PodSandboxId:98d35412f9c27800e5c40501f3ede13c5e838a76ca75f3909c983f43d9e91aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728301586316602787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d331845-59f4-4092-938c-97591d81951b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907
f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
2c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17283008317275923
99,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:
6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631
e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a782c114112ef
6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodSandboxId:4af
52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8,PodSan
dboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxId:660fb1dd2d
72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99fa76880ed25b
bc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7bb587d7-6fe5-4798-a95f-323ab5f31117 name=/runtime.v1.RuntimeService/ListContainers
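The crio debug entries above and below record CRI calls made against CRI-O over its unix socket: RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers (issued with an empty filter, so the full container list is returned on every poll). As a sketch for reproducing the same queries by hand — assuming the default CRI-O socket path inside the VM and the addons-246818 profile used by this run — crictl can be pointed at the same endpoint:

	minikube ssh -p addons-246818 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	minikube ssh -p addons-246818 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

Here `crictl version` exercises the same RuntimeService/Version endpoint shown in these entries, and `crictl ps -a` enumerates the same containers that appear in the ListContainersResponse payloads (busybox, nginx, the csi-hostpath sidecars, coredns, kube-proxy, storage-provisioner, and the control-plane pods).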
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.881318322Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87758c61-0f8e-4490-b7fc-2a7029dcb635 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.881395480Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87758c61-0f8e-4490-b7fc-2a7029dcb635 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.883562478Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3bb7716-ba3d-4af7-863d-b7a3e95ce6ea name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.884628351Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301777884601580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3bb7716-ba3d-4af7-863d-b7a3e95ce6ea name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.885865451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc3b6e8e-73fc-44c1-875b-ce25e607185d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.885923020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc3b6e8e-73fc-44c1-875b-ce25e607185d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.886364682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebc0ebd2dc5ea489727702fe5287176a8dfee72d4a838bf924a405e1bc8d5263,PodSandboxId:98d35412f9c27800e5c40501f3ede13c5e838a76ca75f3909c983f43d9e91aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728301586316602787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d331845-59f4-4092-938c-97591d81951b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907
f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
2c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17283008317275923
99,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:
6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631
e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a782c114112ef
6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodSandboxId:4af
52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8,PodSan
dboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxId:660fb1dd2d
72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99fa76880ed25b
bc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc3b6e8e-73fc-44c1-875b-ce25e607185d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.929575567Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b2d3edd-6821-4313-9c5a-0a88cd65fcb6 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.929653617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b2d3edd-6821-4313-9c5a-0a88cd65fcb6 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.931458840Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52a57318-4b8a-4407-92e3-3087934bf109 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.932997110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301777932962132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52a57318-4b8a-4407-92e3-3087934bf109 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.933958656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d47b70e-e323-435c-96fb-a927c02c4439 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.934017360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d47b70e-e323-435c-96fb-a927c02c4439 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.934492832Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebc0ebd2dc5ea489727702fe5287176a8dfee72d4a838bf924a405e1bc8d5263,PodSandboxId:98d35412f9c27800e5c40501f3ede13c5e838a76ca75f3909c983f43d9e91aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728301586316602787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d331845-59f4-4092-938c-97591d81951b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907
f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
2c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17283008317275923
99,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:
6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631
e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a782c114112ef
6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodSandboxId:4af
52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8,PodSan
dboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxId:660fb1dd2d
72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99fa76880ed25b
bc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d47b70e-e323-435c-96fb-a927c02c4439 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.971077604Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=357b098e-711d-40e6-98b8-5cf9b6371022 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.971150639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=357b098e-711d-40e6-98b8-5cf9b6371022 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.973090107Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf2fcfb9-c8bf-4596-b9db-fe8071943878 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.974400517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301777974367557,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf2fcfb9-c8bf-4596-b9db-fe8071943878 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.975104842Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96afc57e-96bf-4cdd-9b11-5afa344008e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.975166302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96afc57e-96bf-4cdd-9b11-5afa344008e6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:49:37 addons-246818 crio[659]: time="2024-10-07 11:49:37.975586988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ebc0ebd2dc5ea489727702fe5287176a8dfee72d4a838bf924a405e1bc8d5263,PodSandboxId:98d35412f9c27800e5c40501f3ede13c5e838a76ca75f3909c983f43d9e91aa7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728301586316602787,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9d331845-59f4-4092-938c-97591d81951b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:018072193f0f90e27c1a83edde9202b837960d29dc7d9b47ee95fba68c8b5766,PodSandboxId:d49de85842a0d4d28fa2bafd574fc6c9361bec2bcdf837ea2be80cc5d91884b3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728301415074105134,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d86b2c09-e064-4560-be78-a763c6b35ac1,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57828bc9be9d579df8bed89571f406811f1ffb1f00dc2bc8652b8a2f22be516f,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1728300839567019597,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 9a80f5e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47756e0237323f9107b9525bb03fa3f36032675ecaabd0071682994edcb08306,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1728300837689959622,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 743e34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55b8cd6e90ea469d957d48a462ac9feaa824b734736bed29bec57622041b9c5a,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1728300835703417272,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907
f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870a7af54cbdca29f410c5811fe1021db9e60636a4fdcb0e1b9fcf2a4b6564ca,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1728300834807769405,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
2c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d50c7be11d706498c39735db57c5a43ffe6b0d17c01e7261f0d94ed3ef9297ad,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1728300833238784672,Labels:map[string]string{io.kuber
netes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb3c49e5a57e06ed13276d64caff575e3ba27dd1e60b66a479758adb55a0cca3,PodSandboxId:8bd0ba34143b726def524cc7ab4502ed94f7d4a4867c8e94b5b8f268dbb31b5b,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:17283008317275923
99,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91820122-4ed3-4251-b1fd-f63756f7e814,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79ce9975927b1b6c26bf3fdd8146a4b05d2dcd41be2d739d76598ee22a5a2bc9,PodSandboxId:9bcf15f18f2a94815b6ef254c711081070388acf571e5e6d3e956386de4241b2,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba
9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1728300830168788299,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-d8rpq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52c9f352-e70d-47a1-907f-b13d53f6bc60,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea77c9e2ea78dab985adc77eb47fd7a7d2d76e547b0bc9bfb8772a6e8a8ef645,PodSandboxId:37fe00b1ba65875353277cf19749b53ba2c451438892c2008fa0f3cacfd7c48f,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1728300827913884067,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a120d65-04bc-42e4-b324-49d7300d4ed8,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72f67a14ad8105fcf5d82c2d80d562f7a4488db968fbb542eef5ee1fd19e60e0,PodSandboxId:b45a2edd29772432bded77a3f7733ad1e86026ab221f340da6e9ebfe18885934,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822315551627,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9tkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b4fd6e6-702e-4d96-8f01-4a2de4f9bc91,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4f852c16268c05c6f9c197ce53ed301b157ff8c6399c0ffa26b34537002dd4d,PodSandboxId:ad1976920b5444987b4c4eaefc3a88eedb1f002e28b3ddc58e405793608b6349,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079
b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1728300822196090957,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-q9hxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 189d7791-dda8-49aa-b59d-36fdbc31d559,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77,PodSandboxId:4d66856d952939677f8b9255f514901def5e802b0c5bd4d7ca51745ade3fa789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:
6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728300747843687344,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f27f3bc-8533-41d5-b82e-373f84b67952,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965,PodSandboxId:81ad4b72c15e57467b7e0d391cdb6365298b9a08cf781667c999c1d4cd222a38,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631
e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728300744883776825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9n6rn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a65cd5da-6560-4c5a-9311-ca855450e9a9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07021166cf32e5864494e63e36e1e36cb43a782c114112ef
6169d09c055ec11e,PodSandboxId:946e3367f9d80bdfc822dbfbc31d440fb396ffca5490887a2a0ae50a08d89063,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728300742335630070,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l8kql,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 847b99db-d42a-483a-87e5-f70b492c2430,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae,PodSandboxId:4af
52b2553e39a37dd90202fa74cac21612cde19065c9beca74a5bc9f080307a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728300731211021096,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f17cf77c78c1b593584efb40709f32a,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8,PodSan
dboxId:9eda8e53f6a534e2ce534de13c67a401179716fb0c22b2cd4ccffb8c7ec68234,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728300731203478554,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a756da3ed92ee145f2f5d2ebafbcd2c,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a,PodSandboxId:660fb1dd2d
72344c8ebb0ee693548641ef7d9d6c11f4ffd8479adcd22cc248a8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728300731224209443,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 374eb5896a5b2a3f0cd3c0c0d7763afa,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4,PodSandboxId:d314e18e8281d99fa76880ed25b
bc377f181865f6a56d3ffbfe83518d177f5a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728300731206474345,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-246818,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f840631d8eb4dc60d684d9191f1d6c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96afc57e-96bf-4cdd-9b11-5afa344008e6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	ebc0ebd2dc5ea       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          3 minutes ago       Running             busybox                                  0                   98d35412f9c27       busybox
	018072193f0f9       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                                              6 minutes ago       Running             nginx                                    0                   d49de85842a0d       nginx
	57828bc9be9d5       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          15 minutes ago      Running             csi-snapshotter                          0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	47756e0237323       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          15 minutes ago      Running             csi-provisioner                          0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	55b8cd6e90ea4       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            15 minutes ago      Running             liveness-probe                           0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	870a7af54cbdc       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           15 minutes ago      Running             hostpath                                 0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	d50c7be11d706       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                15 minutes ago      Running             node-driver-registrar                    0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	cb3c49e5a57e0       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             15 minutes ago      Running             csi-attacher                             0                   8bd0ba34143b7       csi-hostpath-attacher-0
	79ce9975927b1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   15 minutes ago      Running             csi-external-health-monitor-controller   0                   9bcf15f18f2a9       csi-hostpathplugin-d8rpq
	ea77c9e2ea78d       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              15 minutes ago      Running             csi-resizer                              0                   37fe00b1ba658       csi-hostpath-resizer-0
	72f67a14ad810       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      15 minutes ago      Running             volume-snapshot-controller               0                   b45a2edd29772       snapshot-controller-56fcc65765-q9tkd
	d4f852c16268c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      15 minutes ago      Running             volume-snapshot-controller               0                   ad1976920b544       snapshot-controller-56fcc65765-q9hxr
	64b3fe56b0b4d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             17 minutes ago      Running             storage-provisioner                      0                   4d66856d95293       storage-provisioner
	0282c1110abcf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             17 minutes ago      Running             coredns                                  0                   81ad4b72c15e5       coredns-7c65d6cfc9-9n6rn
	07021166cf32e       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             17 minutes ago      Running             kube-proxy                               0                   946e3367f9d80       kube-proxy-l8kql
	c89d7f8df3494       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             17 minutes ago      Running             kube-scheduler                           0                   660fb1dd2d723       kube-scheduler-addons-246818
	8f63af3616abb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             17 minutes ago      Running             kube-controller-manager                  0                   4af52b2553e39       kube-controller-manager-addons-246818
	1c2b9ede2bcb3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             17 minutes ago      Running             etcd                                     0                   d314e18e8281d       etcd-addons-246818
	c555e8eeff012       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             17 minutes ago      Running             kube-apiserver                           0                   9eda8e53f6a53       kube-apiserver-addons-246818
	
	
	==> coredns [0282c1110abcf1ee192b5c36d30dcb626cb7285e261ba8570b181d2fd90a6965] <==
	[INFO] 10.244.0.20:35979 - 50929 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000094399s
	[INFO] 10.244.0.20:35979 - 42029 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065933s
	[INFO] 10.244.0.20:35979 - 25183 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057751s
	[INFO] 10.244.0.20:35979 - 54907 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000095463s
	[INFO] 10.244.0.20:34909 - 60733 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000120477s
	[INFO] 10.244.0.20:34909 - 42487 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000068621s
	[INFO] 10.244.0.20:34909 - 31874 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057394s
	[INFO] 10.244.0.20:34909 - 13788 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054117s
	[INFO] 10.244.0.20:34909 - 6549 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051197s
	[INFO] 10.244.0.20:34909 - 4644 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000064603s
	[INFO] 10.244.0.20:34909 - 56885 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058973s
	[INFO] 10.244.0.20:57201 - 16169 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000178388s
	[INFO] 10.244.0.20:59552 - 54214 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000188326s
	[INFO] 10.244.0.20:59552 - 7076 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066726s
	[INFO] 10.244.0.20:57201 - 48302 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053174s
	[INFO] 10.244.0.20:57201 - 24270 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042552s
	[INFO] 10.244.0.20:59552 - 29538 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000082299s
	[INFO] 10.244.0.20:59552 - 36425 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000192845s
	[INFO] 10.244.0.20:59552 - 53723 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000349523s
	[INFO] 10.244.0.20:57201 - 43093 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000092543s
	[INFO] 10.244.0.20:57201 - 60283 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00026043s
	[INFO] 10.244.0.20:59552 - 65427 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000100959s
	[INFO] 10.244.0.20:59552 - 6694 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000188822s
	[INFO] 10.244.0.20:57201 - 24145 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000109508s
	[INFO] 10.244.0.20:57201 - 8067 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000089735s
	
	
	==> describe nodes <==
	Name:               addons-246818
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-246818
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=addons-246818
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T11_32_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-246818
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-246818"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 11:32:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-246818
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 11:49:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 11:46:52 +0000   Mon, 07 Oct 2024 11:32:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 11:46:52 +0000   Mon, 07 Oct 2024 11:32:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 11:46:52 +0000   Mon, 07 Oct 2024 11:32:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 11:46:52 +0000   Mon, 07 Oct 2024 11:32:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    addons-246818
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a7e71aa8d4d4e109baa99d216d2d35a
	  System UUID:                5a7e71aa-8d4d-4e10-9baa-99d216d2d35a
	  Boot ID:                    1e1e4db1-e3af-4cfb-96cf-4a407d094dcb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-69v2g                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-7c65d6cfc9-9n6rn                                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     17m
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 csi-hostpathplugin-d8rpq                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-addons-246818                                            100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         17m
	  kube-system                 kube-apiserver-addons-246818                                  250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-addons-246818                         200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-l8kql                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-addons-246818                                  100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-56fcc65765-q9hxr                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 snapshot-controller-56fcc65765-q9tkd                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  local-path-storage          helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node addons-246818 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node addons-246818 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node addons-246818 status is now: NodeHasSufficientPID
	  Normal  NodeReady                17m   kubelet          Node addons-246818 status is now: NodeReady
	  Normal  RegisteredNode           17m   node-controller  Node addons-246818 event: Registered Node addons-246818 in Controller
	
	
	==> dmesg <==
	[  +4.824342] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.804390] kauditd_printk_skb: 43 callbacks suppressed
	[  +5.058503] kauditd_printk_skb: 108 callbacks suppressed
	[  +5.053847] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.458158] kauditd_printk_skb: 78 callbacks suppressed
	[  +8.783756] kauditd_printk_skb: 22 callbacks suppressed
	[Oct 7 11:33] kauditd_printk_skb: 32 callbacks suppressed
	[ +42.426579] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.667940] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.940260] kauditd_printk_skb: 38 callbacks suppressed
	[Oct 7 11:34] kauditd_printk_skb: 2 callbacks suppressed
	[ +48.225055] kauditd_printk_skb: 15 callbacks suppressed
	[Oct 7 11:35] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.972304] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 7 11:43] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.308875] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.325093] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.739676] kauditd_printk_skb: 48 callbacks suppressed
	[  +5.143132] kauditd_printk_skb: 20 callbacks suppressed
	[Oct 7 11:45] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.352080] kauditd_printk_skb: 31 callbacks suppressed
	[Oct 7 11:46] kauditd_printk_skb: 1 callbacks suppressed
	[ +17.190014] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 7 11:48] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 7 11:49] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [1c2b9ede2bcb361acc88ceef8c95069898d8113b1e6410298eda101ee78bf6c4] <==
	{"level":"info","ts":"2024-10-07T11:33:57.598168Z","caller":"traceutil/trace.go:171","msg":"trace[1610592843] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1069; }","duration":"332.016264ms","start":"2024-10-07T11:33:57.266146Z","end":"2024-10-07T11:33:57.598162Z","steps":["trace[1610592843] 'agreement among raft nodes before linearized reading'  (duration: 331.93248ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.598655Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.659891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-07T11:33:57.598711Z","caller":"traceutil/trace.go:171","msg":"trace[1734221806] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1069; }","duration":"138.717909ms","start":"2024-10-07T11:33:57.459985Z","end":"2024-10-07T11:33:57.598703Z","steps":["trace[1734221806] 'agreement among raft nodes before linearized reading'  (duration: 138.621511ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.598843Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.683257ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:33:57.598900Z","caller":"traceutil/trace.go:171","msg":"trace[1418508135] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"147.743392ms","start":"2024-10-07T11:33:57.451149Z","end":"2024-10-07T11:33:57.598892Z","steps":["trace[1418508135] 'agreement among raft nodes before linearized reading'  (duration: 147.663333ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.598872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"304.22319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:33:57.598979Z","caller":"traceutil/trace.go:171","msg":"trace[1080542174] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"304.328661ms","start":"2024-10-07T11:33:57.294641Z","end":"2024-10-07T11:33:57.598970Z","steps":["trace[1080542174] 'agreement among raft nodes before linearized reading'  (duration: 304.214885ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:33:57.599028Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:33:57.294615Z","time spent":"304.404536ms","remote":"127.0.0.1:46982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-10-07T11:34:55.174206Z","caller":"traceutil/trace.go:171","msg":"trace[2115876705] linearizableReadLoop","detail":"{readStateIndex:1224; appliedIndex:1223; }","duration":"118.016178ms","start":"2024-10-07T11:34:55.056148Z","end":"2024-10-07T11:34:55.174164Z","steps":["trace[2115876705] 'read index received'  (duration: 117.833312ms)","trace[2115876705] 'applied index is now lower than readState.Index'  (duration: 181.97µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T11:34:55.174576Z","caller":"traceutil/trace.go:171","msg":"trace[695574193] transaction","detail":"{read_only:false; response_revision:1176; number_of_response:1; }","duration":"175.99018ms","start":"2024-10-07T11:34:54.998568Z","end":"2024-10-07T11:34:55.174558Z","steps":["trace[695574193] 'process raft request'  (duration: 175.463941ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:34:55.174726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.52903ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:34:55.175588Z","caller":"traceutil/trace.go:171","msg":"trace[1717354007] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1176; }","duration":"119.452051ms","start":"2024-10-07T11:34:55.056121Z","end":"2024-10-07T11:34:55.175573Z","steps":["trace[1717354007] 'agreement among raft nodes before linearized reading'  (duration: 118.512449ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:42:12.102784Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1443}
	{"level":"info","ts":"2024-10-07T11:42:12.139478Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1443,"took":"35.711987ms","hash":2488319999,"current-db-size-bytes":5902336,"current-db-size":"5.9 MB","current-db-size-in-use-bytes":2895872,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-10-07T11:42:12.139591Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2488319999,"revision":1443,"compact-revision":-1}
	{"level":"info","ts":"2024-10-07T11:43:18.110906Z","caller":"traceutil/trace.go:171","msg":"trace[325646834] linearizableReadLoop","detail":"{readStateIndex:2187; appliedIndex:2186; }","duration":"261.537214ms","start":"2024-10-07T11:43:17.849341Z","end":"2024-10-07T11:43:18.110878Z","steps":["trace[325646834] 'read index received'  (duration: 261.404239ms)","trace[325646834] 'applied index is now lower than readState.Index'  (duration: 132.582µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-07T11:43:18.111051Z","caller":"traceutil/trace.go:171","msg":"trace[977940061] transaction","detail":"{read_only:false; response_revision:2029; number_of_response:1; }","duration":"389.974345ms","start":"2024-10-07T11:43:17.721067Z","end":"2024-10-07T11:43:18.111041Z","steps":["trace[977940061] 'process raft request'  (duration: 389.72661ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:43:18.111247Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.449824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"warn","ts":"2024-10-07T11:43:18.111341Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-07T11:43:17.721046Z","time spent":"390.024254ms","remote":"127.0.0.1:47046","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-khdavvvmsdaoutnun36u7rbvlu\" mod_revision:1961 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-khdavvvmsdaoutnun36u7rbvlu\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-khdavvvmsdaoutnun36u7rbvlu\" > >"}
	{"level":"info","ts":"2024-10-07T11:43:18.111353Z","caller":"traceutil/trace.go:171","msg":"trace[2088660386] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2029; }","duration":"175.589035ms","start":"2024-10-07T11:43:17.935755Z","end":"2024-10-07T11:43:18.111344Z","steps":["trace[2088660386] 'agreement among raft nodes before linearized reading'  (duration: 175.35089ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:43:18.111578Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.227097ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:8"}
	{"level":"info","ts":"2024-10-07T11:43:18.111600Z","caller":"traceutil/trace.go:171","msg":"trace[668771085] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:2029; }","duration":"262.260378ms","start":"2024-10-07T11:43:17.849335Z","end":"2024-10-07T11:43:18.111595Z","steps":["trace[668771085] 'agreement among raft nodes before linearized reading'  (duration: 262.135923ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:47:12.110298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1859}
	{"level":"info","ts":"2024-10-07T11:47:12.131252Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1859,"took":"20.392946ms","hash":1211739089,"current-db-size-bytes":5902336,"current-db-size":"5.9 MB","current-db-size-in-use-bytes":4247552,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-10-07T11:47:12.131383Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1211739089,"revision":1859,"compact-revision":1443}
	
	
	==> kernel <==
	 11:49:38 up 17 min,  0 users,  load average: 0.17, 0.28, 0.35
	Linux addons-246818 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c555e8eeff0125162ff84aad73a86df5a3ea2da34e98ee9423cc3878224e02d8] <==
	E1007 11:34:12.210058       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.180.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.180.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.180.136:443: i/o timeout" logger="UnhandledError"
	I1007 11:34:12.229890       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1007 11:43:13.404446       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.123.192"}
	I1007 11:43:31.610761       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1007 11:43:31.793061       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.111.126"}
	I1007 11:43:35.415346       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1007 11:43:36.447558       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1007 11:45:52.143032       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.225.248"}
	E1007 11:48:56.779471       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:48:57.792910       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:48:58.801232       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:48:59.812543       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:00.819351       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:01.827078       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:02.836541       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:03.845605       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:04.854008       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:05.863935       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:06.878509       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:07.894682       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:08.902611       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:09.909934       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:10.781496       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1007 11:49:10.917405       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1007 11:49:13.288547       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [8f63af3616abb1e25698fb452260375d41cb229fb0f9751ed3dde2b7ce401eae] <==
	E1007 11:48:07.906063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1007 11:48:41.466077       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="13.727µs"
	I1007 11:48:43.540394       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="142.853µs"
	W1007 11:48:47.070569       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:48:47.070639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E1007 11:48:50.546791       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	I1007 11:48:54.543212       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="467.425µs"
	I1007 11:48:58.316351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="6.167µs"
	E1007 11:49:05.547675       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	W1007 11:49:20.399831       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1007 11:49:20.399979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E1007 11:49:20.548674       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1007 11:49:24.056500       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:24.293808       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:24.420889       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:24.559973       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:24.709948       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:25.009930       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:25.338957       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:25.783040       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:26.568932       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:27.986849       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:30.666597       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	E1007 11:49:35.549565       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1007 11:49:35.906092       1 namespace_controller.go:164] "Unhandled Error" err="deletion of namespace local-path-storage failed: unexpected items still remain in namespace: local-path-storage for gvr: /v1, Resource=pods" logger="UnhandledError"
	
	
	==> kube-proxy [07021166cf32e5864494e63e36e1e36cb43a782c114112ef6169d09c055ec11e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 11:32:23.243441       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 11:32:23.257157       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.141"]
	E1007 11:32:23.257303       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:32:23.344187       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 11:32:23.344232       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 11:32:23.344291       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:32:23.348157       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:32:23.349642       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:32:23.349675       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:32:23.353061       1 config.go:199] "Starting service config controller"
	I1007 11:32:23.353107       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:32:23.353132       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:32:23.353136       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:32:23.353652       1 config.go:328] "Starting node config controller"
	I1007 11:32:23.353680       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:32:23.453423       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:32:23.453488       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:32:23.453719       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c89d7f8df34943b4d8d96df95927d7ab9c3d0d4fda16cb0dc336e6bba1ed331a] <==
	W1007 11:32:13.856022       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 11:32:13.856054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.719501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 11:32:14.719572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.721026       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 11:32:14.721098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.734053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 11:32:14.734189       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.747594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 11:32:14.747648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.853414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 11:32:14.853573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.943033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 11:32:14.943144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:14.979068       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 11:32:14.979173       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 11:32:15.003337       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 11:32:15.003472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:15.093807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 11:32:15.093884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:15.121824       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 11:32:15.121876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 11:32:15.145698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 11:32:15.145757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 11:32:17.639557       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 11:49:08 addons-246818 kubelet[1196]: I1007 11:49:08.525440    1196 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 11:49:11 addons-246818 kubelet[1196]: I1007 11:49:11.856599    1196 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgvtr\" (UniqueName: \"kubernetes.io/projected/061506d6-ef07-4852-b9f4-9c28e30da0be-kube-api-access-qgvtr\") pod \"061506d6-ef07-4852-b9f4-9c28e30da0be\" (UID: \"061506d6-ef07-4852-b9f4-9c28e30da0be\") "
	Oct 07 11:49:11 addons-246818 kubelet[1196]: I1007 11:49:11.856683    1196 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/061506d6-ef07-4852-b9f4-9c28e30da0be-config-volume\") pod \"061506d6-ef07-4852-b9f4-9c28e30da0be\" (UID: \"061506d6-ef07-4852-b9f4-9c28e30da0be\") "
	Oct 07 11:49:11 addons-246818 kubelet[1196]: I1007 11:49:11.857178    1196 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061506d6-ef07-4852-b9f4-9c28e30da0be-config-volume" (OuterVolumeSpecName: "config-volume") pod "061506d6-ef07-4852-b9f4-9c28e30da0be" (UID: "061506d6-ef07-4852-b9f4-9c28e30da0be"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 07 11:49:11 addons-246818 kubelet[1196]: I1007 11:49:11.865541    1196 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061506d6-ef07-4852-b9f4-9c28e30da0be-kube-api-access-qgvtr" (OuterVolumeSpecName: "kube-api-access-qgvtr") pod "061506d6-ef07-4852-b9f4-9c28e30da0be" (UID: "061506d6-ef07-4852-b9f4-9c28e30da0be"). InnerVolumeSpecName "kube-api-access-qgvtr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 07 11:49:11 addons-246818 kubelet[1196]: I1007 11:49:11.911338    1196 scope.go:117] "RemoveContainer" containerID="1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e"
	Oct 07 11:49:11 addons-246818 kubelet[1196]: I1007 11:49:11.951519    1196 scope.go:117] "RemoveContainer" containerID="1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e"
	Oct 07 11:49:11 addons-246818 kubelet[1196]: E1007 11:49:11.952219    1196 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e\": container with ID starting with 1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e not found: ID does not exist" containerID="1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e"
	Oct 07 11:49:11 addons-246818 kubelet[1196]: I1007 11:49:11.952252    1196 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e"} err="failed to get container status \"1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e\": rpc error: code = NotFound desc = could not find container \"1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e\": container with ID starting with 1944cdab752531bca2668742b47756923086fae85c83d3ecad01a347a2a76a6e not found: ID does not exist"
	Oct 07 11:49:11 addons-246818 kubelet[1196]: I1007 11:49:11.957701    1196 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/061506d6-ef07-4852-b9f4-9c28e30da0be-config-volume\") on node \"addons-246818\" DevicePath \"\""
	Oct 07 11:49:11 addons-246818 kubelet[1196]: I1007 11:49:11.957751    1196 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qgvtr\" (UniqueName: \"kubernetes.io/projected/061506d6-ef07-4852-b9f4-9c28e30da0be-kube-api-access-qgvtr\") on node \"addons-246818\" DevicePath \"\""
	Oct 07 11:49:12 addons-246818 kubelet[1196]: I1007 11:49:12.529098    1196 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="061506d6-ef07-4852-b9f4-9c28e30da0be" path="/var/lib/kubelet/pods/061506d6-ef07-4852-b9f4-9c28e30da0be/volumes"
	Oct 07 11:49:13 addons-246818 kubelet[1196]: E1007 11:49:13.526709    1196 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7dd2a563-8ddd-4a27-b356-1d2368c56e79"
	Oct 07 11:49:16 addons-246818 kubelet[1196]: E1007 11:49:16.556187    1196 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 11:49:16 addons-246818 kubelet[1196]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 11:49:16 addons-246818 kubelet[1196]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 11:49:16 addons-246818 kubelet[1196]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 11:49:16 addons-246818 kubelet[1196]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 11:49:16 addons-246818 kubelet[1196]: E1007 11:49:16.963312    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301756962889466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:49:16 addons-246818 kubelet[1196]: E1007 11:49:16.963354    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301756962889466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:49:26 addons-246818 kubelet[1196]: E1007 11:49:26.966903    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301766966164405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:49:26 addons-246818 kubelet[1196]: E1007 11:49:26.967238    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301766966164405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:49:27 addons-246818 kubelet[1196]: E1007 11:49:27.526624    1196 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7dd2a563-8ddd-4a27-b356-1d2368c56e79"
	Oct 07 11:49:36 addons-246818 kubelet[1196]: E1007 11:49:36.970113    1196 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301776969594877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:49:36 addons-246818 kubelet[1196]: E1007 11:49:36.970205    1196 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728301776969594877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:517063,},InodesUsed:&UInt64Value{Value:182,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [64b3fe56b0b4d3fe117735565f2a0aeab451e5355bb33873142df1501d850d77] <==
	I1007 11:32:29.154950       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:32:29.177899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:32:29.177961       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:32:29.210127       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:32:29.210330       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-246818_c20493a4-b4c1-4d82-aa60-bc8f32f150cc!
	I1007 11:32:29.211374       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd5fb25e-787a-4fbd-bcb7-131f507b7555", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-246818_c20493a4-b4c1-4d82-aa60-bc8f32f150cc became leader
	I1007 11:32:29.318137       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-246818_c20493a4-b4c1-4d82-aa60-bc8f32f150cc!
	

                                                
                                                
-- /stdout --
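(Editor's note, not part of the captured run: the controller-manager errors above report the "local-path" StorageClass missing for pvc "default/test-pvc" and the local-path-storage namespace stuck with pods still present. A minimal sketch, using only standard kubectl commands against this profile, of confirming that state by hand:)

	kubectl --context addons-246818 get storageclass
	kubectl --context addons-246818 get namespace local-path-storage -o jsonpath={.status.phase}
	kubectl --context addons-246818 get pods -n local-path-storage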
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-246818 -n addons-246818
helpers_test.go:261: (dbg) Run:  kubectl --context addons-246818 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-246818 describe pod hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-246818 describe pod hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6: exit status 1 (87.382486ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-69v2g
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-246818/192.168.39.141
	Start Time:       Mon, 07 Oct 2024 11:45:51 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:           10.244.0.28
	Controlled By:  ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-khkjd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-khkjd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m48s                default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-69v2g to addons-246818
	  Warning  Failed     71s (x2 over 2m43s)  kubelet            Failed to pull image "docker.io/kicbase/echo-server:1.0": reading manifest 1.0 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     71s (x2 over 2m43s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    56s (x2 over 2m42s)  kubelet            Back-off pulling image "docker.io/kicbase/echo-server:1.0"
	  Warning  Failed     56s (x2 over 2m42s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    45s (x3 over 3m47s)  kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-246818/192.168.39.141
	Start Time:       Mon, 07 Oct 2024 11:43:36 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fs7ff (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-fs7ff:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m3s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-246818
	  Warning  Failed     5m32s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m22s (x4 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     40s (x4 over 5m32s)   kubelet            Error: ErrImagePull
	  Warning  Failed     40s (x3 over 4m30s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    12s (x7 over 5m31s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     12s (x7 over 5m31s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42qhr (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-42qhr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-246818 describe pod hello-world-app-55bf9c44b4-69v2g task-pv-pod test-local-path helper-pod-create-pvc-1076f3dc-35f6-412b-9100-ae09cd9e50b6: exit status 1
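(Editor's note, not part of the test run: every ErrImagePull event above traces back to Docker Hub's anonymous-pull rate limit ("toomanyrequests"). A minimal sketch of authenticating pulls for the default namespace on this profile; the credentials below are placeholders:)

	kubectl --context addons-246818 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-246818 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'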
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-246818 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.916766913s)
--- FAIL: TestAddons/parallel/CSI (387.85s)
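(Editor's note, again not part of the run: as an alternative to authenticating pulls, the docker.io fetches can be avoided entirely by loading the failing images into the cluster from a host-side copy, assuming they are already present on the host:)

	out/minikube-linux-amd64 -p addons-246818 image load docker.io/nginx:latest
	out/minikube-linux-amd64 -p addons-246818 image load docker.io/kicbase/echo-server:1.0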

                                                
                                    
x
+
TestAddons/parallel/LocalPath (386.36s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:883: (dbg) Run:  kubectl --context addons-246818 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:889: (dbg) Run:  kubectl --context addons-246818 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:893: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default
... (the identical PVC phase check above was repeated until the wait deadline expired)
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-246818 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.504µs)
helpers_test.go:396: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:894: failed waiting for PVC test-pvc: context deadline exceeded
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-246818 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m26.133284999s)
--- FAIL: TestAddons/parallel/LocalPath (386.36s)
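For context, the failure mode above is what a deadline-bounded PVC wait produces when the claim never leaves Pending. The sketch below is illustrative only: it expresses the same wait with client-go instead of the kubectl jsonpath loop the harness actually shells out to, and the helper name waitForPVCBound, the polling interval, and the timeout are assumptions, not minikube test code.

// Hypothetical sketch (not the minikube test harness): wait for a PVC to reach
// phase "Bound" under a hard deadline, analogous to the kubectl polling logged
// above. If the claim never leaves "Pending", the poll returns
// "context deadline exceeded", which is exactly the error reported here.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls the PVC phase every few seconds until it is Bound or
// the timeout elapses. Name, interval, and signature are illustrative only.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 3*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVCBound(context.Background(), cs, "default", "test-pvc", 5*time.Minute); err != nil {
		fmt.Println("PVC never became Bound:", err) // e.g. "context deadline exceeded"
	}
}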

                                                
                                    
TestAddons/StoppedEnableDisable (154.48s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-246818
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-246818: exit status 82 (2m0.490951758s)

                                                
                                                
-- stdout --
	* Stopping node "addons-246818"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-246818" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-246818
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-246818: exit status 11 (21.69837397s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.141:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-246818" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-246818
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-246818: exit status 11 (6.141780076s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.141:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-246818" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-246818
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-246818: exit status 11 (6.144313815s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.141:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-246818" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.48s)
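The root failure here is the GUEST_STOP_TIMEOUT (exit status 82); the later addon enable/disable errors are fallout from the VM still being unreachable over SSH ("no route to host"). A minimal sketch of one possible workaround is shown below: retrying the stop with a longer per-attempt timeout before running follow-up commands. The wrapper name, retry policy, and timeouts are assumptions and not part of the minikube tooling.

// Illustrative only: retry "minikube stop" when it fails with the stop timeout
// seen above, giving the VM extra time before any follow-up commands run.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// stopWithRetry runs "minikube stop -p <profile>" up to attempts times, each
// bounded by perAttempt. Binary path, profile, and retry policy are assumptions.
func stopWithRetry(binary, profile string, attempts int, perAttempt time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		ctx, cancel := context.WithTimeout(context.Background(), perAttempt)
		out, err := exec.CommandContext(ctx, binary, "stop", "-p", profile).CombinedOutput()
		cancel()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d failed: %v\n%s", i+1, err, out)
		time.Sleep(10 * time.Second) // back off before the next attempt
	}
	return lastErr
}

func main() {
	if err := stopWithRetry("out/minikube-linux-amd64", "addons-246818", 3, 3*time.Minute); err != nil {
		fmt.Println("stop still failing:", err)
	}
}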

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (190.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5d337753-0806-4e51-8df7-1d6a0ef08ac6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004627146s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-790363 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-790363 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-790363 get pvc myclaim -o=json
I1007 11:56:48.990803  384271 retry.go:31] will retry after 1.744380317s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:8e361877-6451-471a-9a14-0c0a728fdf66 ResourceVersion:731 Generation:0 CreationTimestamp:2024-10-07 11:56:48 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc002132130 VolumeMode:0xc002132140 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-790363 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-790363 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [25367271-f008-4500-8e41-4f290db932a2] Pending
helpers_test.go:344: "sp-pod" [25367271-f008-4500-8e41-4f290db932a2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1007 11:57:45.244913  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-790363 -n functional-790363
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-10-07 11:59:51.18900045 +0000 UTC m=+1709.807584477
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-790363 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-790363 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-790363/192.168.39.166
Start Time:       Mon, 07 Oct 2024 11:56:50 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qhzl4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-qhzl4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age               From               Message
----     ------     ----              ----               -------
Normal   Scheduled  3m                default-scheduler  Successfully assigned default/sp-pod to functional-790363
Warning  Failed     87s               kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     87s               kubelet            Error: ErrImagePull
Normal   BackOff    87s               kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     87s               kubelet            Error: ImagePullBackOff
Normal   Pulling    75s (x2 over 3m)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-790363 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-790363 logs sp-pod -n default: exit status 1 (64.620008ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-790363 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
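The underlying cause above is the Docker Hub pull rate limit (toomanyrequests) on the unauthenticated docker.io/nginx pull. A minimal mitigation sketch follows, assuming Docker Hub credentials are available in DOCKERHUB_USER/DOCKERHUB_TOKEN: create a dockerconfigjson pull secret and attach it to the default ServiceAccount so pulls from docker.io are authenticated. The secret name and credential source are assumptions, not part of the test suite.

// Hypothetical mitigation for the docker.io rate-limit failure above: authenticate
// image pulls by wiring a dockerconfigjson secret into the default ServiceAccount.
package main

import (
	"context"
	"encoding/base64"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Assumed credential source: environment variables on the CI host.
	user, token := os.Getenv("DOCKERHUB_USER"), os.Getenv("DOCKERHUB_TOKEN")
	auth := base64.StdEncoding.EncodeToString([]byte(user + ":" + token))
	dockercfg := fmt.Sprintf(`{"auths":{"https://index.docker.io/v1/":{"auth":%q}}}`, auth)

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "dockerhub-creds", Namespace: "default"},
		Type:       corev1.SecretTypeDockerConfigJson,
		StringData: map[string]string{corev1.DockerConfigJsonKey: dockercfg},
	}
	if _, err := cs.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Reference the secret from the default ServiceAccount so new pods use it.
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	sa.ImagePullSecrets = append(sa.ImagePullSecrets, corev1.LocalObjectReference{Name: "dockerhub-creds"})
	if _, err := cs.CoreV1().ServiceAccounts("default").Update(ctx, sa, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("docker.io pulls via the default ServiceAccount are now authenticated")
}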
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-790363 -n functional-790363
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 logs -n 25: (1.784067316s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:58 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                 |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | -T /mount-9p | grep 9p                                                 |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh -- ls                                            | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | -la /mount-9p                                                          |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh sudo                                             | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | umount -f /mount-9p                                                    |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-790363                                                   | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-790363                                                   | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-790363                                                   | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | -T /mount2                                                             |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | -T /mount3                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-790363                                                   | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | --kill=true                                                            |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh sudo                                             | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | systemctl is-active docker                                             |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh sudo                                             | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | systemctl is-active containerd                                         |                   |         |         |                     |                     |
	| license        |                                                                        | minikube          | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	| image          | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | image ls --format short                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | image ls --format yaml                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh pgrep                                            | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | buildkitd                                                              |                   |         |         |                     |                     |
	| image          | functional-790363 image build -t                                       | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | localhost/my-image:functional-790363                                   |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                   |         |         |                     |                     |
	| image          | functional-790363 image ls                                             | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	| image          | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | image ls --format json                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | image ls --format table                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| update-context | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:58:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:58:04.593359  398541 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:58:04.593500  398541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:58:04.593511  398541 out.go:358] Setting ErrFile to fd 2...
	I1007 11:58:04.593517  398541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:58:04.593725  398541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 11:58:04.594283  398541 out.go:352] Setting JSON to false
	I1007 11:58:04.595347  398541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6031,"bootTime":1728296254,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:58:04.595465  398541 start.go:139] virtualization: kvm guest
	I1007 11:58:04.597752  398541 out.go:177] * [functional-790363] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:58:04.599094  398541 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 11:58:04.599150  398541 notify.go:220] Checking for updates...
	I1007 11:58:04.601722  398541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:58:04.603124  398541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:58:04.604480  398541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:58:04.605664  398541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:58:04.606987  398541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:58:04.608903  398541 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:58:04.609517  398541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:58:04.609613  398541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:58:04.625104  398541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I1007 11:58:04.625527  398541 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:58:04.626120  398541 main.go:141] libmachine: Using API Version  1
	I1007 11:58:04.626140  398541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:58:04.626512  398541 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:58:04.626689  398541 main.go:141] libmachine: (functional-790363) Calling .DriverName
	I1007 11:58:04.626923  398541 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:58:04.627246  398541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:58:04.627283  398541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:58:04.642432  398541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I1007 11:58:04.642909  398541 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:58:04.643423  398541 main.go:141] libmachine: Using API Version  1
	I1007 11:58:04.643449  398541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:58:04.643785  398541 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:58:04.643980  398541 main.go:141] libmachine: (functional-790363) Calling .DriverName
	I1007 11:58:04.678306  398541 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 11:58:04.679688  398541 start.go:297] selected driver: kvm2
	I1007 11:58:04.679703  398541 start.go:901] validating driver "kvm2" against &{Name:functional-790363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:functional-790363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:58:04.679811  398541 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:58:04.680770  398541 cni.go:84] Creating CNI manager for ""
	I1007 11:58:04.680826  398541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:58:04.680876  398541 start.go:340] cluster config:
	{Name:functional-790363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-790363 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:58:04.682553  398541 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.128648996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302392128610021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3dedf43-78bf-4e85-bd47-e01b90f8d63d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.130601181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a786b4ab-95a1-44cb-b0a0-1a84ad8ac1a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.130703641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a786b4ab-95a1-44cb-b0a0-1a84ad8ac1a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.131499681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89f8fb2e372535ca6f1f7185e5511386ab5e1bab5f70ad0d52c06519e6aadb63,PodSandboxId:1e9cdb2c412a06df333e1ab43a58851cd733c94afcc9407933f6ed141776b9ef,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1728302343825596773,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-srlwz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d36cc72a-f7ca-4393-b599-82d95feaaa06,},Annotations:map[string]string{io.kube
rnetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8b49ad030cff2a9431a545949d8ceef79a7eea304fd994aa22952524c0302a,PodSandboxId:60e4f8e51947e63a3eec7ebd3a92f11d6d82d785d491c3d70b111059aa1d438d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1728302341634466255,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-5x8f4,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5d581836-45b4-4fe4-bf4a-1a99bbb5c5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d6481ff3fb2759693be28c18e54fa70a286b4079665dc7b31db6a006499650,PodSandboxId:a620634aa4478dd5e98f8e4f28854798a7df809f998b0946e94dde0b1ee0520d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1728302335940065096,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4ea3387-0a8d-43b1-8ed0-a5caf15f672b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:772145244fb5b4ad69814782e6f61a663b885d904999512e589a1973e87c7ddb,PodSandboxId:5884a4903268b35ce628b627c63612151a18a20f86652a13e3a1e8486c2ce577,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302275663264088,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-nnv6b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa888d7c-ba75-4424-b0f9-0b53ef6e15d2,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9cdda99c8ad24dc25e8fa6479550da461cb07d2333b51693cee96208f660919,PodSandboxId:93b684576603d362a3319e2204151f37a933976c06f4c4feb2f1054e6a889299,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302273260854167,Labels:map[string]string{io.kubernetes.container.na
me: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-rzmtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd3c63a-e3a9-48e4-b35c-6eeb69b38295,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da497bf8743c332e5a6a9396e479580d1128396bc850f7d152b128d045b0142,PodSandboxId:f0d8d612a1ec8424515c96165c2e98153ccc57c0adbd899ad439f28c5800c251,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302178907479114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52d08247d6ff1b34e5e4b288f37fc9a19c77ab57506233ff65db5cbed9a5f3e,PodSandboxId:fd4b0b771c475c86aa1dea63c83c954d5dc6e0b4fdd025135f4543b142fbe94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302178919871816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb598880eecba2c5c8c2f41f8884e222d259ef0d90575dc6ec4d33499f7460af,PodSandboxId:4ce7a93b11806115eba21a52f2993355e8b9c7e288de221d63bcc82283a5ff64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302178899137904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d00430bfbcf707370f01b4a7f3ba74a3c2170a4dcd447a14fb6f30290cbdf4,PodSandboxId:9df509c87de947d3243dc27626ec5d38c1ffdde60cd2e55c75cae7ed14a1a3a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af275
7a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302175275035728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2f4e1143ad4f52e45d83a37fce32014,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4874f237059e817484ace6d656787502853e81d7141e1d514cd9aa5df71f60,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,
State:CONTAINER_RUNNING,CreatedAt:1728302175113240054,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d3c1943121b8063348f009d186760a53a3335eef313487c4478127e65ea93d,PodSandboxId:c41bc7ac169396111a662bb988b76a5feb80d693f7b939f04b5255968e0cd433,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt
:1728302175065152683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0197a3aeefefa450f9b715eeb4b74aaaaa77717c18abf83037f26abb62501eb,PodSandboxId:8fb748069241f9a61920b1ab214e10d285146b6f27901b703ff16f72419410bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172
8302175083894134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef19bca5cdcf4cd5760d287d58f51774c733dd7de17033fb8baea58d8d5fcf,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:172830217
2027993068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd7587c08707642845da1f9da47461e4c097627d532cad22ae7387d3ba9fe03,PodSandboxId:9fdf58736a072898020a400c69917a938b2b456503eb4cfcaa5ad3e703ffcb08,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728302137385436849,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f8cf9e9c8abae44283271b2927935dd8fdc1406459daeecc22a0c82109d857,PodSandboxId:6a5700facfbfc0947e103d7114bdc5e721e8b8c0655b7a9a22c1c8e7dbec6443,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728302137369099169,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e6d68f4398bdebd16c2654551dbce1a0754fbedd91ba4e1e9a9ed7cb57458,PodSandboxId:f498dfe1c9da7c83965d715c9ee1cce6204872f686af710abddcb6d4ca7c83b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728302137327928762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5499881b9a7180c26488452535c7e046dc20fd0f9f731e391ac7068a6c6f0d39,PodSandboxId:79a71698b535f3f4cfead8219422c4d9db0f18141c6ef8af7e29119705cae29d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728302133579969355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4178a6c4038f98ff6558bf467847c0d8e1cb20f19bc3d89db4eb328db6566c81,PodSandboxId:e3d24e3aa199e18f641e477dc448159ac5e82cb14d0735d2ff93801e20362623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728302133558770973,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecd2fc86fc14f9839ea531d8a496b2b394f3f9e3089fd73f67210bd3a4c3b5e,PodSandboxId:25e7c2c97e166c9edabfb1fbaeca324159eb4c1806df26386356086bd2762fd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728302133547616394,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82665f23d6e5401737fad860192067,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a786b4ab-95a1-44cb-b0a0-1a84ad8ac1a5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.280033544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1224a25-97e9-4059-b4ac-d12b84dbc382 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.280130223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1224a25-97e9-4059-b4ac-d12b84dbc382 name=/runtime.v1.RuntimeService/Version
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.281837188Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c07ee192-6c95-42c3-a828-a4db256c1c68 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.282599901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302392282568814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c07ee192-6c95-42c3-a828-a4db256c1c68 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.283427244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98a6c2a2-83ac-436a-b19a-43eaed8309ad name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.283486839Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98a6c2a2-83ac-436a-b19a-43eaed8309ad name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 11:59:52 functional-790363 crio[4846]: time="2024-10-07 11:59:52.283849199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89f8fb2e372535ca6f1f7185e5511386ab5e1bab5f70ad0d52c06519e6aadb63,PodSandboxId:1e9cdb2c412a06df333e1ab43a58851cd733c94afcc9407933f6ed141776b9ef,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1728302343825596773,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-srlwz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d36cc72a-f7ca-4393-b599-82d95feaaa06,},Annotations:map[string]string{io.kube
rnetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8b49ad030cff2a9431a545949d8ceef79a7eea304fd994aa22952524c0302a,PodSandboxId:60e4f8e51947e63a3eec7ebd3a92f11d6d82d785d491c3d70b111059aa1d438d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1728302341634466255,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-5x8f4,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5d581836-45b4-4fe4-bf4a-1a99bbb5c5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d6481ff3fb2759693be28c18e54fa70a286b4079665dc7b31db6a006499650,PodSandboxId:a620634aa4478dd5e98f8e4f28854798a7df809f998b0946e94dde0b1ee0520d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1728302335940065096,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4ea3387-0a8d-43b1-8ed0-a5caf15f672b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:772145244fb5b4ad69814782e6f61a663b885d904999512e589a1973e87c7ddb,PodSandboxId:5884a4903268b35ce628b627c63612151a18a20f86652a13e3a1e8486c2ce577,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302275663264088,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-nnv6b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa888d7c-ba75-4424-b0f9-0b53ef6e15d2,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9cdda99c8ad24dc25e8fa6479550da461cb07d2333b51693cee96208f660919,PodSandboxId:93b684576603d362a3319e2204151f37a933976c06f4c4feb2f1054e6a889299,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302273260854167,Labels:map[string]string{io.kubernetes.container.na
me: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-rzmtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd3c63a-e3a9-48e4-b35c-6eeb69b38295,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da497bf8743c332e5a6a9396e479580d1128396bc850f7d152b128d045b0142,PodSandboxId:f0d8d612a1ec8424515c96165c2e98153ccc57c0adbd899ad439f28c5800c251,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302178907479114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52d08247d6ff1b34e5e4b288f37fc9a19c77ab57506233ff65db5cbed9a5f3e,PodSandboxId:fd4b0b771c475c86aa1dea63c83c954d5dc6e0b4fdd025135f4543b142fbe94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302178919871816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb598880eecba2c5c8c2f41f8884e222d259ef0d90575dc6ec4d33499f7460af,PodSandboxId:4ce7a93b11806115eba21a52f2993355e8b9c7e288de221d63bcc82283a5ff64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302178899137904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d00430bfbcf707370f01b4a7f3ba74a3c2170a4dcd447a14fb6f30290cbdf4,PodSandboxId:9df509c87de947d3243dc27626ec5d38c1ffdde60cd2e55c75cae7ed14a1a3a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af275
7a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302175275035728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2f4e1143ad4f52e45d83a37fce32014,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4874f237059e817484ace6d656787502853e81d7141e1d514cd9aa5df71f60,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,
State:CONTAINER_RUNNING,CreatedAt:1728302175113240054,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d3c1943121b8063348f009d186760a53a3335eef313487c4478127e65ea93d,PodSandboxId:c41bc7ac169396111a662bb988b76a5feb80d693f7b939f04b5255968e0cd433,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt
:1728302175065152683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0197a3aeefefa450f9b715eeb4b74aaaaa77717c18abf83037f26abb62501eb,PodSandboxId:8fb748069241f9a61920b1ab214e10d285146b6f27901b703ff16f72419410bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172
8302175083894134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef19bca5cdcf4cd5760d287d58f51774c733dd7de17033fb8baea58d8d5fcf,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:172830217
2027993068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd7587c08707642845da1f9da47461e4c097627d532cad22ae7387d3ba9fe03,PodSandboxId:9fdf58736a072898020a400c69917a938b2b456503eb4cfcaa5ad3e703ffcb08,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728302137385436849,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f8cf9e9c8abae44283271b2927935dd8fdc1406459daeecc22a0c82109d857,PodSandboxId:6a5700facfbfc0947e103d7114bdc5e721e8b8c0655b7a9a22c1c8e7dbec6443,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728302137369099169,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e6d68f4398bdebd16c2654551dbce1a0754fbedd91ba4e1e9a9ed7cb57458,PodSandboxId:f498dfe1c9da7c83965d715c9ee1cce6204872f686af710abddcb6d4ca7c83b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728302137327928762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5499881b9a7180c26488452535c7e046dc20fd0f9f731e391ac7068a6c6f0d39,PodSandboxId:79a71698b535f3f4cfead8219422c4d9db0f18141c6ef8af7e29119705cae29d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728302133579969355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4178a6c4038f98ff6558bf467847c0d8e1cb20f19bc3d89db4eb328db6566c81,PodSandboxId:e3d24e3aa199e18f641e477dc448159ac5e82cb14d0735d2ff93801e20362623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728302133558770973,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecd2fc86fc14f9839ea531d8a496b2b394f3f9e3089fd73f67210bd3a4c3b5e,PodSandboxId:25e7c2c97e166c9edabfb1fbaeca324159eb4c1806df26386356086bd2762fd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728302133547616394,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82665f23d6e5401737fad860192067,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98a6c2a2-83ac-436a-b19a-43eaed8309ad name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	89f8fb2e37253       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   48 seconds ago       Running             dashboard-metrics-scraper   0                   1e9cdb2c412a0       dashboard-metrics-scraper-c5db448b4-srlwz
	4b8b49ad030cf       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         50 seconds ago       Running             kubernetes-dashboard        0                   60e4f8e51947e       kubernetes-dashboard-695b96c756-5x8f4
	37d6481ff3fb2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              56 seconds ago       Exited              mount-munger                0                   a620634aa4478       busybox-mount
	772145244fb5b       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 About a minute ago   Running             echoserver                  0                   5884a4903268b       hello-node-connect-67bdd5bbb4-nnv6b
	e9cdda99c8ad2       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               About a minute ago   Running             echoserver                  0                   93b684576603d       hello-node-6b9f76b5c7-rzmtr
	b52d08247d6ff       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago        Running             coredns                     3                   fd4b0b771c475       coredns-7c65d6cfc9-2cmgd
	2da497bf8743c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 3 minutes ago        Running             kube-proxy                  3                   f0d8d612a1ec8       kube-proxy-tg2xd
	fb598880eecba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         3                   4ce7a93b11806       storage-provisioner
	a0d00430bfbcf       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 3 minutes ago        Running             kube-apiserver              0                   9df509c87de94       kube-apiserver-functional-790363
	ae4874f237059       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 3 minutes ago        Running             etcd                        4                   4adb904f233af       etcd-functional-790363
	d0197a3aeefef       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 3 minutes ago        Running             kube-controller-manager     3                   8fb748069241f       kube-controller-manager-functional-790363
	f8d3c1943121b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 3 minutes ago        Running             kube-scheduler              3                   c41bc7ac16939       kube-scheduler-functional-790363
	c9ef19bca5cdc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 3 minutes ago        Exited              etcd                        3                   4adb904f233af       etcd-functional-790363
	fcd7587c08707       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago        Exited              coredns                     2                   9fdf58736a072       coredns-7c65d6cfc9-2cmgd
	19f8cf9e9c8ab       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 4 minutes ago        Exited              kube-proxy                  2                   6a5700facfbfc       kube-proxy-tg2xd
	123e6d68f4398       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago        Exited              storage-provisioner         2                   f498dfe1c9da7       storage-provisioner
	5499881b9a718       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 4 minutes ago        Exited              kube-scheduler              2                   79a71698b535f       kube-scheduler-functional-790363
	4178a6c4038f9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 4 minutes ago        Exited              kube-controller-manager     2                   e3d24e3aa199e       kube-controller-manager-functional-790363
	8ecd2fc86fc14       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 4 minutes ago        Exited              kube-apiserver              2                   25e7c2c97e166       kube-apiserver-functional-790363
	
	
	==> coredns [b52d08247d6ff1b34e5e4b288f37fc9a19c77ab57506233ff65db5cbed9a5f3e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49893 - 19346 "HINFO IN 5999558149317467541.5755056667726841505. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029745842s
	
	
	==> coredns [fcd7587c08707642845da1f9da47461e4c097627d532cad22ae7387d3ba9fe03] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35672 - 53728 "HINFO IN 2885700761022076205.1762642202912000522. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030356774s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-790363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-790363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=functional-790363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T11_54_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 11:54:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-790363
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 11:59:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 11:59:21 +0000   Mon, 07 Oct 2024 11:54:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 11:59:21 +0000   Mon, 07 Oct 2024 11:54:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 11:59:21 +0000   Mon, 07 Oct 2024 11:54:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 11:59:21 +0000   Mon, 07 Oct 2024 11:54:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.166
	  Hostname:    functional-790363
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 930e7722ab134e61a0cfaa7c4b722ea5
	  System UUID:                930e7722-ab13-4e61-a0cf-aa7c4b722ea5
	  Boot ID:                    c12af0d0-5adc-4ca8-ac1c-3fdaa7c5a465
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-rzmtr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     hello-node-connect-67bdd5bbb4-nnv6b          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     mysql-6cdb49bbb-2hkb9                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    3m10s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7c65d6cfc9-2cmgd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m30s
	  kube-system                 etcd-functional-790363                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m36s
	  kube-system                 kube-apiserver-functional-790363             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-controller-manager-functional-790363    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-tg2xd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-functional-790363             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-srlwz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-5x8f4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m27s                  kube-proxy       
	  Normal  Starting                 3m33s                  kube-proxy       
	  Normal  Starting                 4m14s                  kube-proxy       
	  Normal  Starting                 5m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m41s (x8 over 5m41s)  kubelet          Node functional-790363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s (x8 over 5m41s)  kubelet          Node functional-790363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s (x7 over 5m41s)  kubelet          Node functional-790363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m34s                  kubelet          Node functional-790363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m34s                  kubelet          Node functional-790363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s                  kubelet          Node functional-790363 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m34s                  kubelet          Starting kubelet.
	  Normal  NodeReady                5m33s                  kubelet          Node functional-790363 status is now: NodeReady
	  Normal  RegisteredNode           5m31s                  node-controller  Node functional-790363 event: Registered Node functional-790363 in Controller
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node functional-790363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node functional-790363 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node functional-790363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node functional-790363 event: Registered Node functional-790363 in Controller
	  Normal  Starting                 3m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m38s (x8 over 3m38s)  kubelet          Node functional-790363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m38s (x8 over 3m38s)  kubelet          Node functional-790363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m38s (x7 over 3m38s)  kubelet          Node functional-790363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m31s                  node-controller  Node functional-790363 event: Registered Node functional-790363 in Controller
	
	
	==> dmesg <==
	[  +0.300678] systemd-fstab-generator[2403]: Ignoring "noauto" option for root device
	[  +0.724713] systemd-fstab-generator[2521]: Ignoring "noauto" option for root device
	[  +9.086142] kauditd_printk_skb: 207 callbacks suppressed
	[ +14.497657] systemd-fstab-generator[3470]: Ignoring "noauto" option for root device
	[  +4.614481] kauditd_printk_skb: 38 callbacks suppressed
	[ +14.717633] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +0.090469] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 7 11:56] systemd-fstab-generator[4770]: Ignoring "noauto" option for root device
	[  +0.076580] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.060404] systemd-fstab-generator[4782]: Ignoring "noauto" option for root device
	[  +0.177016] systemd-fstab-generator[4796]: Ignoring "noauto" option for root device
	[  +0.137591] systemd-fstab-generator[4808]: Ignoring "noauto" option for root device
	[  +0.305950] systemd-fstab-generator[4836]: Ignoring "noauto" option for root device
	[  +0.812144] systemd-fstab-generator[4957]: Ignoring "noauto" option for root device
	[  +3.003030] systemd-fstab-generator[5478]: Ignoring "noauto" option for root device
	[  +0.766213] kauditd_printk_skb: 206 callbacks suppressed
	[  +6.777564] kauditd_printk_skb: 33 callbacks suppressed
	[  +9.280115] systemd-fstab-generator[6001]: Ignoring "noauto" option for root device
	[  +6.710289] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.083940] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.371810] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 7 11:57] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 7 11:58] kauditd_printk_skb: 4 callbacks suppressed
	[ +51.325062] kauditd_printk_skb: 32 callbacks suppressed
	[Oct 7 11:59] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [ae4874f237059e817484ace6d656787502853e81d7141e1d514cd9aa5df71f60] <==
	{"level":"info","ts":"2024-10-07T11:56:15.476365Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.166:2380"}
	{"level":"info","ts":"2024-10-07T11:56:15.476703Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.166:2380"}
	{"level":"info","ts":"2024-10-07T11:56:16.824736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-07T11:56:16.824860Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-07T11:56:16.824900Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c received MsgPreVoteResp from 21cab5ce19ce9e1c at term 3"}
	{"level":"info","ts":"2024-10-07T11:56:16.824938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became candidate at term 4"}
	{"level":"info","ts":"2024-10-07T11:56:16.824963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c received MsgVoteResp from 21cab5ce19ce9e1c at term 4"}
	{"level":"info","ts":"2024-10-07T11:56:16.825002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became leader at term 4"}
	{"level":"info","ts":"2024-10-07T11:56:16.825037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 21cab5ce19ce9e1c elected leader 21cab5ce19ce9e1c at term 4"}
	{"level":"info","ts":"2024-10-07T11:56:16.828020Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"21cab5ce19ce9e1c","local-member-attributes":"{Name:functional-790363 ClientURLs:[https://192.168.39.166:2379]}","request-path":"/0/members/21cab5ce19ce9e1c/attributes","cluster-id":"fb6a39d7926aa536","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T11:56:16.828272Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:56:16.829175Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:56:16.830515Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T11:56:16.841926Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:56:16.843744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T11:56:16.843811Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T11:56:16.844573Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:56:16.845470Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.166:2379"}
	{"level":"info","ts":"2024-10-07T11:58:23.361705Z","caller":"traceutil/trace.go:171","msg":"trace[2003554841] transaction","detail":"{read_only:false; response_revision:938; number_of_response:1; }","duration":"218.918982ms","start":"2024-10-07T11:58:23.142743Z","end":"2024-10-07T11:58:23.361662Z","steps":["trace[2003554841] 'process raft request'  (duration: 218.550521ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:59:01.405135Z","caller":"traceutil/trace.go:171","msg":"trace[1589393935] transaction","detail":"{read_only:false; response_revision:981; number_of_response:1; }","duration":"100.534052ms","start":"2024-10-07T11:59:01.304569Z","end":"2024-10-07T11:59:01.405103Z","steps":["trace[1589393935] 'process raft request'  (duration: 100.435851ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:59:35.021042Z","caller":"traceutil/trace.go:171","msg":"trace[395090206] linearizableReadLoop","detail":"{readStateIndex:1139; appliedIndex:1138; }","duration":"262.747133ms","start":"2024-10-07T11:59:34.758279Z","end":"2024-10-07T11:59:35.021026Z","steps":["trace[395090206] 'read index received'  (duration: 260.709741ms)","trace[395090206] 'applied index is now lower than readState.Index'  (duration: 2.03432ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T11:59:35.021305Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.947794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:59:35.021424Z","caller":"traceutil/trace.go:171","msg":"trace[877576994] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1032; }","duration":"263.114657ms","start":"2024-10-07T11:59:34.758272Z","end":"2024-10-07T11:59:35.021387Z","steps":["trace[877576994] 'agreement among raft nodes before linearized reading'  (duration: 262.876412ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:59:35.021478Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.039484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:59:35.021517Z","caller":"traceutil/trace.go:171","msg":"trace[294892273] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1032; }","duration":"131.089958ms","start":"2024-10-07T11:59:34.890416Z","end":"2024-10-07T11:59:35.021506Z","steps":["trace[294892273] 'agreement among raft nodes before linearized reading'  (duration: 130.945159ms)"],"step_count":1}
	
	
	==> etcd [c9ef19bca5cdcf4cd5760d287d58f51774c733dd7de17033fb8baea58d8d5fcf] <==
	{"level":"info","ts":"2024-10-07T11:56:12.189884Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-07T11:56:12.196544Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"fb6a39d7926aa536","local-member-id":"21cab5ce19ce9e1c","commit-index":605}
	{"level":"info","ts":"2024-10-07T11:56:12.197330Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-07T11:56:12.197481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became follower at term 3"}
	{"level":"info","ts":"2024-10-07T11:56:12.197680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 21cab5ce19ce9e1c [peers: [], term: 3, commit: 605, applied: 0, lastindex: 605, lastterm: 3]"}
	{"level":"warn","ts":"2024-10-07T11:56:12.202717Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-10-07T11:56:12.209830Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":562}
	{"level":"info","ts":"2024-10-07T11:56:12.215424Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-10-07T11:56:12.219734Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"21cab5ce19ce9e1c","timeout":"7s"}
	{"level":"info","ts":"2024-10-07T11:56:12.220059Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"21cab5ce19ce9e1c"}
	{"level":"info","ts":"2024-10-07T11:56:12.220095Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"21cab5ce19ce9e1c","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-07T11:56:12.220344Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-07T11:56:12.220520Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:56:12.220545Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:56:12.220553Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:56:12.220787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c switched to configuration voters=(2434958445348036124)"}
	{"level":"info","ts":"2024-10-07T11:56:12.220831Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fb6a39d7926aa536","local-member-id":"21cab5ce19ce9e1c","added-peer-id":"21cab5ce19ce9e1c","added-peer-peer-urls":["https://192.168.39.166:2380"]}
	{"level":"info","ts":"2024-10-07T11:56:12.220899Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb6a39d7926aa536","local-member-id":"21cab5ce19ce9e1c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:56:12.220920Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:56:12.221443Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:56:12.224726Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-07T11:56:12.224908Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"21cab5ce19ce9e1c","initial-advertise-peer-urls":["https://192.168.39.166:2380"],"listen-peer-urls":["https://192.168.39.166:2380"],"advertise-client-urls":["https://192.168.39.166:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.166:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T11:56:12.224979Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T11:56:12.225074Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.166:2380"}
	{"level":"info","ts":"2024-10-07T11:56:12.225081Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.166:2380"}
	
	
	==> kernel <==
	 11:59:52 up 6 min,  0 users,  load average: 0.60, 0.45, 0.22
	Linux functional-790363 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8ecd2fc86fc14f9839ea531d8a496b2b394f3f9e3089fd73f67210bd3a4c3b5e] <==
	W1007 11:56:03.728038       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.728084       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.728121       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.728174       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729329       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729428       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729546       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729611       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729671       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729732       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729753       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729844       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729940       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730011       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730089       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730251       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730322       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730377       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730428       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730488       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730548       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730587       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730656       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730717       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730780       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a0d00430bfbcf707370f01b4a7f3ba74a3c2170a4dcd447a14fb6f30290cbdf4] <==
	I1007 11:56:18.255984       1 autoregister_controller.go:144] Starting autoregister controller
	I1007 11:56:18.255989       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1007 11:56:18.255994       1 cache.go:39] Caches are synced for autoregister controller
	I1007 11:56:18.257594       1 shared_informer.go:320] Caches are synced for configmaps
	E1007 11:56:18.269574       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1007 11:56:18.271284       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1007 11:56:18.277826       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 11:56:18.279311       1 policy_source.go:224] refreshing policies
	I1007 11:56:18.352650       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 11:56:19.149255       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1007 11:56:19.932919       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 11:56:19.946640       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 11:56:19.990522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 11:56:20.024310       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1007 11:56:20.031327       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1007 11:56:21.810860       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 11:56:21.903960       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 11:56:37.891273       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.122.205"}
	I1007 11:56:42.392662       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.199.181"}
	I1007 11:56:42.446717       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1007 11:56:44.107288       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.185.121"}
	I1007 11:56:49.344812       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.72.112"}
	I1007 11:58:05.750042       1 controller.go:615] quota admission added evaluator for: namespaces
	I1007 11:58:06.078676       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.172.208"}
	I1007 11:58:06.102383       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.61.34"}
	
	
	==> kube-controller-manager [4178a6c4038f98ff6558bf467847c0d8e1cb20f19bc3d89db4eb328db6566c81] <==
	I1007 11:55:39.815391       1 shared_informer.go:320] Caches are synced for PVC protection
	I1007 11:55:39.815431       1 shared_informer.go:320] Caches are synced for HPA
	I1007 11:55:39.821455       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1007 11:55:39.821541       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1007 11:55:39.822738       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1007 11:55:39.822794       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1007 11:55:39.828003       1 shared_informer.go:320] Caches are synced for node
	I1007 11:55:39.828051       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1007 11:55:39.828088       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1007 11:55:39.828093       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1007 11:55:39.828098       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1007 11:55:39.828159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-790363"
	I1007 11:55:39.903128       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1007 11:55:39.917816       1 shared_informer.go:320] Caches are synced for endpoint
	I1007 11:55:39.946337       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1007 11:55:39.963964       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1007 11:55:39.964147       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1007 11:55:40.000522       1 shared_informer.go:320] Caches are synced for resource quota
	I1007 11:55:40.005091       1 shared_informer.go:320] Caches are synced for daemon sets
	I1007 11:55:40.014313       1 shared_informer.go:320] Caches are synced for stateful set
	I1007 11:55:40.014437       1 shared_informer.go:320] Caches are synced for crt configmap
	I1007 11:55:40.021690       1 shared_informer.go:320] Caches are synced for resource quota
	I1007 11:55:40.454154       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 11:55:40.515282       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 11:55:40.515379       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [d0197a3aeefefa450f9b715eeb4b74aaaaa77717c18abf83037f26abb62501eb] <==
	E1007 11:58:05.879084       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1007 11:58:05.882391       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="11.003308ms"
	E1007 11:58:05.882426       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1007 11:58:05.893639       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.872591ms"
	E1007 11:58:05.893682       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1007 11:58:05.893661       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="9.568508ms"
	E1007 11:58:05.893802       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1007 11:58:05.900663       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.338176ms"
	E1007 11:58:05.900706       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1007 11:58:05.942263       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="46.343076ms"
	I1007 11:58:05.972658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="57.289332ms"
	I1007 11:58:05.979523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="35.713716ms"
	I1007 11:58:05.981987       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="81.285µs"
	I1007 11:58:05.989654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="34.149µs"
	I1007 11:58:05.998247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="24.83347ms"
	I1007 11:58:05.998475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="114.014µs"
	I1007 11:58:06.042723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="38.352µs"
	I1007 11:58:20.747333       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-790363"
	I1007 11:59:01.812501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="18.571201ms"
	I1007 11:59:01.812608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="41.275µs"
	I1007 11:59:04.834961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="23.016016ms"
	I1007 11:59:04.835390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="120.562µs"
	I1007 11:59:09.664077       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="52.908µs"
	I1007 11:59:21.313657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-790363"
	I1007 11:59:22.669895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="100.567µs"
	
	
	==> kube-proxy [19f8cf9e9c8abae44283271b2927935dd8fdc1406459daeecc22a0c82109d857] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 11:55:37.657074       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 11:55:37.683581       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.166"]
	E1007 11:55:37.683665       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:55:37.742077       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 11:55:37.742136       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 11:55:37.742160       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:55:37.745665       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:55:37.746020       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:55:37.746067       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:55:37.747429       1 config.go:199] "Starting service config controller"
	I1007 11:55:37.747730       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:55:37.747794       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:55:37.747813       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:55:37.748361       1 config.go:328] "Starting node config controller"
	I1007 11:55:37.749801       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:55:37.848805       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:55:37.848860       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:55:37.850372       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [2da497bf8743c332e5a6a9396e479580d1128396bc850f7d152b128d045b0142] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 11:56:19.246837       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 11:56:19.270413       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.166"]
	E1007 11:56:19.270605       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:56:19.344550       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 11:56:19.344669       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 11:56:19.344745       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:56:19.350093       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:56:19.350475       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:56:19.350889       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:56:19.352481       1 config.go:199] "Starting service config controller"
	I1007 11:56:19.352763       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:56:19.352909       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:56:19.353028       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:56:19.353632       1 config.go:328] "Starting node config controller"
	I1007 11:56:19.354376       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:56:19.452999       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:56:19.453105       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:56:19.454733       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5499881b9a7180c26488452535c7e046dc20fd0f9f731e391ac7068a6c6f0d39] <==
	I1007 11:55:34.854471       1 serving.go:386] Generated self-signed cert in-memory
	W1007 11:55:36.442258       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1007 11:55:36.442630       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 11:55:36.442758       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 11:55:36.442784       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 11:55:36.513773       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1007 11:55:36.513864       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:55:36.518140       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1007 11:55:36.519465       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1007 11:55:36.519511       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 11:55:36.519530       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1007 11:55:36.620514       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 11:56:03.690711       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1007 11:56:03.690764       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1007 11:56:03.690922       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f8d3c1943121b8063348f009d186760a53a3335eef313487c4478127e65ea93d] <==
	I1007 11:56:16.241751       1 serving.go:386] Generated self-signed cert in-memory
	W1007 11:56:18.214360       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1007 11:56:18.214455       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 11:56:18.214466       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 11:56:18.214472       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 11:56:18.265312       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1007 11:56:18.265358       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:56:18.267756       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1007 11:56:18.267979       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1007 11:56:18.267998       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1007 11:56:18.268755       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 11:56:18.370861       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 11:58:57 functional-790363 kubelet[5485]: I1007 11:58:57.968877    5485 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e4ea3387-0a8d-43b1-8ed0-a5caf15f672b-test-volume\") pod \"e4ea3387-0a8d-43b1-8ed0-a5caf15f672b\" (UID: \"e4ea3387-0a8d-43b1-8ed0-a5caf15f672b\") "
	Oct 07 11:58:57 functional-790363 kubelet[5485]: I1007 11:58:57.968936    5485 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snrw6\" (UniqueName: \"kubernetes.io/projected/e4ea3387-0a8d-43b1-8ed0-a5caf15f672b-kube-api-access-snrw6\") pod \"e4ea3387-0a8d-43b1-8ed0-a5caf15f672b\" (UID: \"e4ea3387-0a8d-43b1-8ed0-a5caf15f672b\") "
	Oct 07 11:58:57 functional-790363 kubelet[5485]: I1007 11:58:57.969336    5485 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4ea3387-0a8d-43b1-8ed0-a5caf15f672b-test-volume" (OuterVolumeSpecName: "test-volume") pod "e4ea3387-0a8d-43b1-8ed0-a5caf15f672b" (UID: "e4ea3387-0a8d-43b1-8ed0-a5caf15f672b"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 07 11:58:57 functional-790363 kubelet[5485]: I1007 11:58:57.984563    5485 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ea3387-0a8d-43b1-8ed0-a5caf15f672b-kube-api-access-snrw6" (OuterVolumeSpecName: "kube-api-access-snrw6") pod "e4ea3387-0a8d-43b1-8ed0-a5caf15f672b" (UID: "e4ea3387-0a8d-43b1-8ed0-a5caf15f672b"). InnerVolumeSpecName "kube-api-access-snrw6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 07 11:58:58 functional-790363 kubelet[5485]: I1007 11:58:58.069159    5485 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e4ea3387-0a8d-43b1-8ed0-a5caf15f672b-test-volume\") on node \"functional-790363\" DevicePath \"\""
	Oct 07 11:58:58 functional-790363 kubelet[5485]: I1007 11:58:58.069248    5485 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-snrw6\" (UniqueName: \"kubernetes.io/projected/e4ea3387-0a8d-43b1-8ed0-a5caf15f672b-kube-api-access-snrw6\") on node \"functional-790363\" DevicePath \"\""
	Oct 07 11:58:58 functional-790363 kubelet[5485]: I1007 11:58:58.761379    5485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a620634aa4478dd5e98f8e4f28854798a7df809f998b0946e94dde0b1ee0520d"
	Oct 07 11:59:04 functional-790363 kubelet[5485]: E1007 11:59:04.744024    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302344743173061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:207138,},InodesUsed:&UInt64Value{Value:101,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:59:04 functional-790363 kubelet[5485]: E1007 11:59:04.744075    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302344743173061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:207138,},InodesUsed:&UInt64Value{Value:101,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:59:04 functional-790363 kubelet[5485]: I1007 11:59:04.815418    5485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-srlwz" podStartSLOduration=2.576973635 podStartE2EDuration="59.815402465s" podCreationTimestamp="2024-10-07 11:58:05 +0000 UTC" firstStartedPulling="2024-10-07 11:58:06.570391927 +0000 UTC m=+112.092322287" lastFinishedPulling="2024-10-07 11:59:03.808820754 +0000 UTC m=+169.330751117" observedRunningTime="2024-10-07 11:59:04.815061594 +0000 UTC m=+170.336991976" watchObservedRunningTime="2024-10-07 11:59:04.815402465 +0000 UTC m=+170.337332844"
	Oct 07 11:59:04 functional-790363 kubelet[5485]: I1007 11:59:04.815605    5485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-5x8f4" podStartSLOduration=4.740612836 podStartE2EDuration="59.81559931s" podCreationTimestamp="2024-10-07 11:58:05 +0000 UTC" firstStartedPulling="2024-10-07 11:58:06.533614557 +0000 UTC m=+112.055544921" lastFinishedPulling="2024-10-07 11:59:01.608601027 +0000 UTC m=+167.130531395" observedRunningTime="2024-10-07 11:59:01.805799628 +0000 UTC m=+167.327730010" watchObservedRunningTime="2024-10-07 11:59:04.81559931 +0000 UTC m=+170.337529701"
	Oct 07 11:59:09 functional-790363 kubelet[5485]: E1007 11:59:09.648631    5485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-2hkb9" podUID="9c457b77-d3eb-43ee-b7b7-74a8d7c21e04"
	Oct 07 11:59:14 functional-790363 kubelet[5485]: E1007 11:59:14.694462    5485 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 11:59:14 functional-790363 kubelet[5485]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 11:59:14 functional-790363 kubelet[5485]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 11:59:14 functional-790363 kubelet[5485]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 11:59:14 functional-790363 kubelet[5485]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 11:59:14 functional-790363 kubelet[5485]: E1007 11:59:14.749890    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302354749067795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:59:14 functional-790363 kubelet[5485]: E1007 11:59:14.749934    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302354749067795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:59:24 functional-790363 kubelet[5485]: E1007 11:59:24.751916    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302364751440976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:59:24 functional-790363 kubelet[5485]: E1007 11:59:24.751948    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302364751440976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:59:34 functional-790363 kubelet[5485]: E1007 11:59:34.753919    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302374753524409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:59:34 functional-790363 kubelet[5485]: E1007 11:59:34.754510    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302374753524409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:59:44 functional-790363 kubelet[5485]: E1007 11:59:44.756301    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302384755859040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 11:59:44 functional-790363 kubelet[5485]: E1007 11:59:44.756593    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302384755859040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [4b8b49ad030cff2a9431a545949d8ceef79a7eea304fd994aa22952524c0302a] <==
	2024/10/07 11:59:01 Using namespace: kubernetes-dashboard
	2024/10/07 11:59:01 Using in-cluster config to connect to apiserver
	2024/10/07 11:59:01 Using secret token for csrf signing
	2024/10/07 11:59:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/07 11:59:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/07 11:59:01 Successful initial request to the apiserver, version: v1.31.1
	2024/10/07 11:59:01 Generating JWE encryption key
	2024/10/07 11:59:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/07 11:59:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/07 11:59:01 Initializing JWE encryption key from synchronized object
	2024/10/07 11:59:01 Creating in-cluster Sidecar client
	2024/10/07 11:59:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 11:59:01 Serving insecurely on HTTP port: 9090
	2024/10/07 11:59:31 Successful request to sidecar
	2024/10/07 11:59:01 Starting overwatch
	
	
	==> storage-provisioner [123e6d68f4398bdebd16c2654551dbce1a0754fbedd91ba4e1e9a9ed7cb57458] <==
	I1007 11:55:37.519097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:55:37.559730       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:55:37.560100       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:55:55.000816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:55:55.001247       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8efe579f-3215-4c7a-8c92-097839d8037b", APIVersion:"v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-790363_b8ea66ad-baa3-4ee4-83a2-ee3ef6cc2645 became leader
	I1007 11:55:55.001713       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-790363_b8ea66ad-baa3-4ee4-83a2-ee3ef6cc2645!
	I1007 11:55:55.102925       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-790363_b8ea66ad-baa3-4ee4-83a2-ee3ef6cc2645!
	
	
	==> storage-provisioner [fb598880eecba2c5c8c2f41f8884e222d259ef0d90575dc6ec4d33499f7460af] <==
	I1007 11:56:19.057273       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:56:19.104009       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:56:19.104119       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:56:36.507628       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:56:36.507836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-790363_8378ba94-1c35-463a-a3c7-623f90fbb37a!
	I1007 11:56:36.508162       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8efe579f-3215-4c7a-8c92-097839d8037b", APIVersion:"v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-790363_8378ba94-1c35-463a-a3c7-623f90fbb37a became leader
	I1007 11:56:36.608813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-790363_8378ba94-1c35-463a-a3c7-623f90fbb37a!
	I1007 11:56:48.920700       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1007 11:56:48.921347       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"8e361877-6451-471a-9a14-0c0a728fdf66", APIVersion:"v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1007 11:56:48.920951       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    4a316bda-2edb-423a-bc91-28ef99de7d57 378 0 2024-10-07 11:54:23 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-07 11:54:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-8e361877-6451-471a-9a14-0c0a728fdf66 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  8e361877-6451-471a-9a14-0c0a728fdf66 731 0 2024-10-07 11:56:48 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-07 11:56:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-07 11:56:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1007 11:56:48.940275       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-8e361877-6451-471a-9a14-0c0a728fdf66" provisioned
	I1007 11:56:48.940388       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1007 11:56:48.940405       1 volume_store.go:212] Trying to save persistentvolume "pvc-8e361877-6451-471a-9a14-0c0a728fdf66"
	I1007 11:56:48.982259       1 volume_store.go:219] persistentvolume "pvc-8e361877-6451-471a-9a14-0c0a728fdf66" saved
	I1007 11:56:48.982548       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"8e361877-6451-471a-9a14-0c0a728fdf66", APIVersion:"v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-8e361877-6451-471a-9a14-0c0a728fdf66
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-790363 -n functional-790363
helpers_test.go:261: (dbg) Run:  kubectl --context functional-790363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-2hkb9 sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-790363 describe pod busybox-mount mysql-6cdb49bbb-2hkb9 sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-790363 describe pod busybox-mount mysql-6cdb49bbb-2hkb9 sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-790363/192.168.39.166
	Start Time:       Mon, 07 Oct 2024 11:58:04 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://37d6481ff3fb2759693be28c18e54fa70a286b4079665dc7b31db6a006499650
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 07 Oct 2024 11:58:56 +0000
	      Finished:     Mon, 07 Oct 2024 11:58:56 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-snrw6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-snrw6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  109s  default-scheduler  Successfully assigned default/busybox-mount to functional-790363
	  Normal  Pulling    109s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     58s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.184s (50.999s including waiting). Image size: 4631262 bytes.
	  Normal  Created    57s   kubelet            Created container mount-munger
	  Normal  Started    57s   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-2hkb9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-790363/192.168.39.166
	Start Time:       Mon, 07 Oct 2024 11:56:42 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h9t89 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-h9t89:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m11s                default-scheduler  Successfully assigned default/mysql-6cdb49bbb-2hkb9 to functional-790363
	  Warning  Failed     2m40s                kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     59s (x2 over 2m40s)  kubelet            Error: ErrImagePull
	  Warning  Failed     59s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    44s (x2 over 2m39s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     44s (x2 over 2m39s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    31s (x3 over 3m10s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-790363/192.168.39.166
	Start Time:       Mon, 07 Oct 2024 11:56:50 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qhzl4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-qhzl4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m2s                default-scheduler  Successfully assigned default/sp-pod to functional-790363
	  Warning  Failed     89s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     89s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    89s                 kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     89s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    77s (x2 over 3m2s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E1007 12:00:01.380584  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:00:29.087061  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:05:01.380194  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (190.08s)
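The sp-pod and mysql pods above never left ImagePullBackOff because anonymous pulls of docker.io/nginx and docker.io/mysql:5.7 hit Docker Hub's pull rate limit (toomanyrequests); the claim itself was provisioned and bound by the hostpath provisioner. A minimal workaround sketch for future runs, assuming the CI host has an authenticated local Docker daemon, is to pre-load the images into the profile so the kubelet never pulls them anonymously:

# Sketch (assumes local docker credentials): pull once on the host, then
# copy the images into the functional-790363 node so cri-o finds them locally.
docker pull docker.io/mysql:5.7
docker pull docker.io/nginx
minikube -p functional-790363 image load docker.io/mysql:5.7
minikube -p functional-790363 image load docker.io/nginx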

                                                
                                    
TestFunctional/parallel/MySQL (603.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-790363 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-2hkb9" [9c457b77-d3eb-43ee-b7b7-74a8d7c21e04] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-790363 -n functional-790363
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-10-07 12:06:42.740434765 +0000 UTC m=+2121.359018779
functional_test.go:1799: (dbg) Run:  kubectl --context functional-790363 describe po mysql-6cdb49bbb-2hkb9 -n default
functional_test.go:1799: (dbg) kubectl --context functional-790363 describe po mysql-6cdb49bbb-2hkb9 -n default:
Name:             mysql-6cdb49bbb-2hkb9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-790363/192.168.39.166
Start Time:       Mon, 07 Oct 2024 11:56:42 +0000
Labels:           app=mysql
pod-template-hash=6cdb49bbb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/mysql-6cdb49bbb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h9t89 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-h9t89:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-2hkb9 to functional-790363
Warning  Failed     6m7s (x2 over 7m48s)   kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    5m18s (x4 over 9m59s)  kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     4m47s (x2 over 9m29s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     4m47s (x4 over 9m29s)  kubelet            Error: ErrImagePull
Normal   BackOff    4m23s (x7 over 9m28s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     4m23s (x7 over 9m28s)  kubelet            Error: ImagePullBackOff
functional_test.go:1799: (dbg) Run:  kubectl --context functional-790363 logs mysql-6cdb49bbb-2hkb9 -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-790363 logs mysql-6cdb49bbb-2hkb9 -n default: exit status 1 (72.391829ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-6cdb49bbb-2hkb9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1799: kubectl --context functional-790363 logs mysql-6cdb49bbb-2hkb9 -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
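The events above show the same docker.io rate limit (toomanyrequests) on every pull attempt, so the 10m0s wait could not succeed. A quick confirmation sketch, using tooling already present on the guest, is to retry the pull through cri-o and list the pod's events directly:

# Sketch: reproduce the pull inside the node and dump the pod's events;
# both should surface the toomanyrequests error coming from docker.io.
minikube -p functional-790363 ssh -- sudo crictl pull docker.io/library/mysql:5.7
kubectl --context functional-790363 get events -n default --field-selector involvedObject.name=mysql-6cdb49bbb-2hkb9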
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-790363 -n functional-790363
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 logs -n 25: (1.689281841s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:58 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                 |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | -T /mount-9p | grep 9p                                                 |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh -- ls                                            | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | -la /mount-9p                                                          |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh sudo                                             | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | umount -f /mount-9p                                                    |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-790363                                                   | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-790363                                                   | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-790363                                                   | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | -T /mount2                                                             |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh findmnt                                          | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | -T /mount3                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-790363                                                   | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | --kill=true                                                            |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh sudo                                             | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | systemctl is-active docker                                             |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh sudo                                             | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | systemctl is-active containerd                                         |                   |         |         |                     |                     |
	| license        |                                                                        | minikube          | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	| image          | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | image ls --format short                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | image ls --format yaml                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| ssh            | functional-790363 ssh pgrep                                            | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC |                     |
	|                | buildkitd                                                              |                   |         |         |                     |                     |
	| image          | functional-790363 image build -t                                       | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | localhost/my-image:functional-790363                                   |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                   |         |         |                     |                     |
	| image          | functional-790363 image ls                                             | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	| image          | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | image ls --format json                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | image ls --format table                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| update-context | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-790363                                                      | functional-790363 | jenkins | v1.34.0 | 07 Oct 24 11:59 UTC | 07 Oct 24 11:59 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
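The mount/umount rows above correspond to the MountCmdVerifyCleanup checks. For reference, the same 9p mount verification can be replayed by hand against this profile; a minimal sketch, assuming an arbitrary host directory in place of the test's temp path:

  # terminal 1: expose a host directory inside the node over 9p (stays in the foreground)
  out/minikube-linux-amd64 mount -p functional-790363 /tmp/hostdir:/mount1 --alsologtostderr -v=1

  # terminal 2: confirm the mount is visible from inside the node, then clean up
  out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T /mount1"
  out/minikube-linux-amd64 -p functional-790363 ssh "ls -la /mount1"
  out/minikube-linux-amd64 mount -p functional-790363 --kill=true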
	
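The image build/ls rows can be replayed the same way; a rough sketch (testdata/build is the build context used by the test, so substitute any local directory containing a Dockerfile):

  out/minikube-linux-amd64 -p functional-790363 image build -t localhost/my-image:functional-790363 testdata/build --alsologtostderr
  out/minikube-linux-amd64 -p functional-790363 image ls --format table --alsologtostderr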
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:58:04
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:58:04.593359  398541 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:58:04.593500  398541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:58:04.593511  398541 out.go:358] Setting ErrFile to fd 2...
	I1007 11:58:04.593517  398541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:58:04.593725  398541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 11:58:04.594283  398541 out.go:352] Setting JSON to false
	I1007 11:58:04.595347  398541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6031,"bootTime":1728296254,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:58:04.595465  398541 start.go:139] virtualization: kvm guest
	I1007 11:58:04.597752  398541 out.go:177] * [functional-790363] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:58:04.599094  398541 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 11:58:04.599150  398541 notify.go:220] Checking for updates...
	I1007 11:58:04.601722  398541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:58:04.603124  398541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:58:04.604480  398541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:58:04.605664  398541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:58:04.606987  398541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:58:04.608903  398541 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:58:04.609517  398541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:58:04.609613  398541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:58:04.625104  398541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45885
	I1007 11:58:04.625527  398541 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:58:04.626120  398541 main.go:141] libmachine: Using API Version  1
	I1007 11:58:04.626140  398541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:58:04.626512  398541 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:58:04.626689  398541 main.go:141] libmachine: (functional-790363) Calling .DriverName
	I1007 11:58:04.626923  398541 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:58:04.627246  398541 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:58:04.627283  398541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:58:04.642432  398541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I1007 11:58:04.642909  398541 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:58:04.643423  398541 main.go:141] libmachine: Using API Version  1
	I1007 11:58:04.643449  398541 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:58:04.643785  398541 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:58:04.643980  398541 main.go:141] libmachine: (functional-790363) Calling .DriverName
	I1007 11:58:04.678306  398541 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 11:58:04.679688  398541 start.go:297] selected driver: kvm2
	I1007 11:58:04.679703  398541 start.go:901] validating driver "kvm2" against &{Name:functional-790363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:functional-790363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:58:04.679811  398541 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:58:04.680770  398541 cni.go:84] Creating CNI manager for ""
	I1007 11:58:04.680826  398541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:58:04.680876  398541 start.go:340] cluster config:
	{Name:functional-790363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-790363 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:58:04.682553  398541 out.go:177] * dry-run validation complete!
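This start pass ends at dry-run validation, so no cluster state was changed. If the validation needs to be repeated against the existing profile, a hedged sketch (the exact flag set used by the test invocation is not shown in this excerpt):

  out/minikube-linux-amd64 start -p functional-790363 --dry-run --alsologtostderr -v=1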
	
	
	==> CRI-O <==
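The entries below are CRI request/response pairs, mostly RuntimeService/Version, ImageService/ImageFsInfo and RuntimeService/ListContainers polls. The same endpoints can usually be exercised manually with crictl from inside the node; a minimal sketch:

  out/minikube-linux-amd64 -p functional-790363 ssh
  # then, on the node:
  sudo crictl version       # RuntimeService/Version
  sudo crictl imagefsinfo   # ImageService/ImageFsInfo
  sudo crictl ps -a         # RuntimeService/ListContainers with no filters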
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.600534217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302803600508549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc2b4e29-230c-4eff-808f-f1a3c2416d3b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.601386931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a15e35cb-faef-4d02-9289-9aafa4cd9d1d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.601466399Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a15e35cb-faef-4d02-9289-9aafa4cd9d1d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.601885487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89f8fb2e372535ca6f1f7185e5511386ab5e1bab5f70ad0d52c06519e6aadb63,PodSandboxId:1e9cdb2c412a06df333e1ab43a58851cd733c94afcc9407933f6ed141776b9ef,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1728302343825596773,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-srlwz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d36cc72a-f7ca-4393-b599-82d95feaaa06,},Annotations:map[string]string{io.kube
rnetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8b49ad030cff2a9431a545949d8ceef79a7eea304fd994aa22952524c0302a,PodSandboxId:60e4f8e51947e63a3eec7ebd3a92f11d6d82d785d491c3d70b111059aa1d438d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1728302341634466255,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-5x8f4,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5d581836-45b4-4fe4-bf4a-1a99bbb5c5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d6481ff3fb2759693be28c18e54fa70a286b4079665dc7b31db6a006499650,PodSandboxId:a620634aa4478dd5e98f8e4f28854798a7df809f998b0946e94dde0b1ee0520d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1728302335940065096,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4ea3387-0a8d-43b1-8ed0-a5caf15f672b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:772145244fb5b4ad69814782e6f61a663b885d904999512e589a1973e87c7ddb,PodSandboxId:5884a4903268b35ce628b627c63612151a18a20f86652a13e3a1e8486c2ce577,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302275663264088,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-nnv6b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa888d7c-ba75-4424-b0f9-0b53ef6e15d2,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9cdda99c8ad24dc25e8fa6479550da461cb07d2333b51693cee96208f660919,PodSandboxId:93b684576603d362a3319e2204151f37a933976c06f4c4feb2f1054e6a889299,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302273260854167,Labels:map[string]string{io.kubernetes.container.na
me: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-rzmtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd3c63a-e3a9-48e4-b35c-6eeb69b38295,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da497bf8743c332e5a6a9396e479580d1128396bc850f7d152b128d045b0142,PodSandboxId:f0d8d612a1ec8424515c96165c2e98153ccc57c0adbd899ad439f28c5800c251,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302178907479114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52d08247d6ff1b34e5e4b288f37fc9a19c77ab57506233ff65db5cbed9a5f3e,PodSandboxId:fd4b0b771c475c86aa1dea63c83c954d5dc6e0b4fdd025135f4543b142fbe94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302178919871816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb598880eecba2c5c8c2f41f8884e222d259ef0d90575dc6ec4d33499f7460af,PodSandboxId:4ce7a93b11806115eba21a52f2993355e8b9c7e288de221d63bcc82283a5ff64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302178899137904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d00430bfbcf707370f01b4a7f3ba74a3c2170a4dcd447a14fb6f30290cbdf4,PodSandboxId:9df509c87de947d3243dc27626ec5d38c1ffdde60cd2e55c75cae7ed14a1a3a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af275
7a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302175275035728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2f4e1143ad4f52e45d83a37fce32014,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4874f237059e817484ace6d656787502853e81d7141e1d514cd9aa5df71f60,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,
State:CONTAINER_RUNNING,CreatedAt:1728302175113240054,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d3c1943121b8063348f009d186760a53a3335eef313487c4478127e65ea93d,PodSandboxId:c41bc7ac169396111a662bb988b76a5feb80d693f7b939f04b5255968e0cd433,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt
:1728302175065152683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0197a3aeefefa450f9b715eeb4b74aaaaa77717c18abf83037f26abb62501eb,PodSandboxId:8fb748069241f9a61920b1ab214e10d285146b6f27901b703ff16f72419410bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172
8302175083894134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef19bca5cdcf4cd5760d287d58f51774c733dd7de17033fb8baea58d8d5fcf,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:172830217
2027993068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd7587c08707642845da1f9da47461e4c097627d532cad22ae7387d3ba9fe03,PodSandboxId:9fdf58736a072898020a400c69917a938b2b456503eb4cfcaa5ad3e703ffcb08,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728302137385436849,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f8cf9e9c8abae44283271b2927935dd8fdc1406459daeecc22a0c82109d857,PodSandboxId:6a5700facfbfc0947e103d7114bdc5e721e8b8c0655b7a9a22c1c8e7dbec6443,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728302137369099169,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e6d68f4398bdebd16c2654551dbce1a0754fbedd91ba4e1e9a9ed7cb57458,PodSandboxId:f498dfe1c9da7c83965d715c9ee1cce6204872f686af710abddcb6d4ca7c83b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728302137327928762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5499881b9a7180c26488452535c7e046dc20fd0f9f731e391ac7068a6c6f0d39,PodSandboxId:79a71698b535f3f4cfead8219422c4d9db0f18141c6ef8af7e29119705cae29d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728302133579969355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4178a6c4038f98ff6558bf467847c0d8e1cb20f19bc3d89db4eb328db6566c81,PodSandboxId:e3d24e3aa199e18f641e477dc448159ac5e82cb14d0735d2ff93801e20362623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728302133558770973,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecd2fc86fc14f9839ea531d8a496b2b394f3f9e3089fd73f67210bd3a4c3b5e,PodSandboxId:25e7c2c97e166c9edabfb1fbaeca324159eb4c1806df26386356086bd2762fd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728302133547616394,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82665f23d6e5401737fad860192067,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a15e35cb-faef-4d02-9289-9aafa4cd9d1d name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.648401215Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63cde34b-fa1a-4ca9-8d30-510276164925 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.648474271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63cde34b-fa1a-4ca9-8d30-510276164925 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.651130522Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2995b33-526d-4166-bcb7-15396698aea5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.651843290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302803651817510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2995b33-526d-4166-bcb7-15396698aea5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.652609355Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=098a5c1f-b6a2-42bf-8ee5-93cf411e5c0a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.652666046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=098a5c1f-b6a2-42bf-8ee5-93cf411e5c0a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.653045957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89f8fb2e372535ca6f1f7185e5511386ab5e1bab5f70ad0d52c06519e6aadb63,PodSandboxId:1e9cdb2c412a06df333e1ab43a58851cd733c94afcc9407933f6ed141776b9ef,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1728302343825596773,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-srlwz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d36cc72a-f7ca-4393-b599-82d95feaaa06,},Annotations:map[string]string{io.kube
rnetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8b49ad030cff2a9431a545949d8ceef79a7eea304fd994aa22952524c0302a,PodSandboxId:60e4f8e51947e63a3eec7ebd3a92f11d6d82d785d491c3d70b111059aa1d438d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1728302341634466255,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-5x8f4,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5d581836-45b4-4fe4-bf4a-1a99bbb5c5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d6481ff3fb2759693be28c18e54fa70a286b4079665dc7b31db6a006499650,PodSandboxId:a620634aa4478dd5e98f8e4f28854798a7df809f998b0946e94dde0b1ee0520d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1728302335940065096,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4ea3387-0a8d-43b1-8ed0-a5caf15f672b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:772145244fb5b4ad69814782e6f61a663b885d904999512e589a1973e87c7ddb,PodSandboxId:5884a4903268b35ce628b627c63612151a18a20f86652a13e3a1e8486c2ce577,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302275663264088,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-nnv6b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa888d7c-ba75-4424-b0f9-0b53ef6e15d2,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9cdda99c8ad24dc25e8fa6479550da461cb07d2333b51693cee96208f660919,PodSandboxId:93b684576603d362a3319e2204151f37a933976c06f4c4feb2f1054e6a889299,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302273260854167,Labels:map[string]string{io.kubernetes.container.na
me: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-rzmtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd3c63a-e3a9-48e4-b35c-6eeb69b38295,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da497bf8743c332e5a6a9396e479580d1128396bc850f7d152b128d045b0142,PodSandboxId:f0d8d612a1ec8424515c96165c2e98153ccc57c0adbd899ad439f28c5800c251,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302178907479114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52d08247d6ff1b34e5e4b288f37fc9a19c77ab57506233ff65db5cbed9a5f3e,PodSandboxId:fd4b0b771c475c86aa1dea63c83c954d5dc6e0b4fdd025135f4543b142fbe94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302178919871816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb598880eecba2c5c8c2f41f8884e222d259ef0d90575dc6ec4d33499f7460af,PodSandboxId:4ce7a93b11806115eba21a52f2993355e8b9c7e288de221d63bcc82283a5ff64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302178899137904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d00430bfbcf707370f01b4a7f3ba74a3c2170a4dcd447a14fb6f30290cbdf4,PodSandboxId:9df509c87de947d3243dc27626ec5d38c1ffdde60cd2e55c75cae7ed14a1a3a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af275
7a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302175275035728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2f4e1143ad4f52e45d83a37fce32014,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4874f237059e817484ace6d656787502853e81d7141e1d514cd9aa5df71f60,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,
State:CONTAINER_RUNNING,CreatedAt:1728302175113240054,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d3c1943121b8063348f009d186760a53a3335eef313487c4478127e65ea93d,PodSandboxId:c41bc7ac169396111a662bb988b76a5feb80d693f7b939f04b5255968e0cd433,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt
:1728302175065152683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0197a3aeefefa450f9b715eeb4b74aaaaa77717c18abf83037f26abb62501eb,PodSandboxId:8fb748069241f9a61920b1ab214e10d285146b6f27901b703ff16f72419410bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172
8302175083894134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef19bca5cdcf4cd5760d287d58f51774c733dd7de17033fb8baea58d8d5fcf,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:172830217
2027993068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd7587c08707642845da1f9da47461e4c097627d532cad22ae7387d3ba9fe03,PodSandboxId:9fdf58736a072898020a400c69917a938b2b456503eb4cfcaa5ad3e703ffcb08,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728302137385436849,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f8cf9e9c8abae44283271b2927935dd8fdc1406459daeecc22a0c82109d857,PodSandboxId:6a5700facfbfc0947e103d7114bdc5e721e8b8c0655b7a9a22c1c8e7dbec6443,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728302137369099169,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e6d68f4398bdebd16c2654551dbce1a0754fbedd91ba4e1e9a9ed7cb57458,PodSandboxId:f498dfe1c9da7c83965d715c9ee1cce6204872f686af710abddcb6d4ca7c83b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728302137327928762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5499881b9a7180c26488452535c7e046dc20fd0f9f731e391ac7068a6c6f0d39,PodSandboxId:79a71698b535f3f4cfead8219422c4d9db0f18141c6ef8af7e29119705cae29d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728302133579969355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4178a6c4038f98ff6558bf467847c0d8e1cb20f19bc3d89db4eb328db6566c81,PodSandboxId:e3d24e3aa199e18f641e477dc448159ac5e82cb14d0735d2ff93801e20362623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728302133558770973,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecd2fc86fc14f9839ea531d8a496b2b394f3f9e3089fd73f67210bd3a4c3b5e,PodSandboxId:25e7c2c97e166c9edabfb1fbaeca324159eb4c1806df26386356086bd2762fd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728302133547616394,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82665f23d6e5401737fad860192067,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=098a5c1f-b6a2-42bf-8ee5-93cf411e5c0a name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.692071138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=724b9e77-ffce-4278-93a6-eb466a016407 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.692165237Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=724b9e77-ffce-4278-93a6-eb466a016407 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.693499211Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f38bc8a-5187-47c1-b724-6b09364dbfb2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.694333669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302803694301880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f38bc8a-5187-47c1-b724-6b09364dbfb2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.695050295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd556941-87a7-47a2-afc5-1b42ae623d61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.695229610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd556941-87a7-47a2-afc5-1b42ae623d61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.695659086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89f8fb2e372535ca6f1f7185e5511386ab5e1bab5f70ad0d52c06519e6aadb63,PodSandboxId:1e9cdb2c412a06df333e1ab43a58851cd733c94afcc9407933f6ed141776b9ef,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1728302343825596773,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-srlwz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d36cc72a-f7ca-4393-b599-82d95feaaa06,},Annotations:map[string]string{io.kube
rnetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8b49ad030cff2a9431a545949d8ceef79a7eea304fd994aa22952524c0302a,PodSandboxId:60e4f8e51947e63a3eec7ebd3a92f11d6d82d785d491c3d70b111059aa1d438d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1728302341634466255,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-5x8f4,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5d581836-45b4-4fe4-bf4a-1a99bbb5c5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d6481ff3fb2759693be28c18e54fa70a286b4079665dc7b31db6a006499650,PodSandboxId:a620634aa4478dd5e98f8e4f28854798a7df809f998b0946e94dde0b1ee0520d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1728302335940065096,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4ea3387-0a8d-43b1-8ed0-a5caf15f672b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:772145244fb5b4ad69814782e6f61a663b885d904999512e589a1973e87c7ddb,PodSandboxId:5884a4903268b35ce628b627c63612151a18a20f86652a13e3a1e8486c2ce577,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302275663264088,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-nnv6b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa888d7c-ba75-4424-b0f9-0b53ef6e15d2,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9cdda99c8ad24dc25e8fa6479550da461cb07d2333b51693cee96208f660919,PodSandboxId:93b684576603d362a3319e2204151f37a933976c06f4c4feb2f1054e6a889299,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302273260854167,Labels:map[string]string{io.kubernetes.container.na
me: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-rzmtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd3c63a-e3a9-48e4-b35c-6eeb69b38295,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da497bf8743c332e5a6a9396e479580d1128396bc850f7d152b128d045b0142,PodSandboxId:f0d8d612a1ec8424515c96165c2e98153ccc57c0adbd899ad439f28c5800c251,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302178907479114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52d08247d6ff1b34e5e4b288f37fc9a19c77ab57506233ff65db5cbed9a5f3e,PodSandboxId:fd4b0b771c475c86aa1dea63c83c954d5dc6e0b4fdd025135f4543b142fbe94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302178919871816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb598880eecba2c5c8c2f41f8884e222d259ef0d90575dc6ec4d33499f7460af,PodSandboxId:4ce7a93b11806115eba21a52f2993355e8b9c7e288de221d63bcc82283a5ff64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302178899137904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d00430bfbcf707370f01b4a7f3ba74a3c2170a4dcd447a14fb6f30290cbdf4,PodSandboxId:9df509c87de947d3243dc27626ec5d38c1ffdde60cd2e55c75cae7ed14a1a3a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af275
7a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302175275035728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2f4e1143ad4f52e45d83a37fce32014,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4874f237059e817484ace6d656787502853e81d7141e1d514cd9aa5df71f60,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,
State:CONTAINER_RUNNING,CreatedAt:1728302175113240054,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d3c1943121b8063348f009d186760a53a3335eef313487c4478127e65ea93d,PodSandboxId:c41bc7ac169396111a662bb988b76a5feb80d693f7b939f04b5255968e0cd433,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt
:1728302175065152683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0197a3aeefefa450f9b715eeb4b74aaaaa77717c18abf83037f26abb62501eb,PodSandboxId:8fb748069241f9a61920b1ab214e10d285146b6f27901b703ff16f72419410bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172
8302175083894134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef19bca5cdcf4cd5760d287d58f51774c733dd7de17033fb8baea58d8d5fcf,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:172830217
2027993068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd7587c08707642845da1f9da47461e4c097627d532cad22ae7387d3ba9fe03,PodSandboxId:9fdf58736a072898020a400c69917a938b2b456503eb4cfcaa5ad3e703ffcb08,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728302137385436849,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f8cf9e9c8abae44283271b2927935dd8fdc1406459daeecc22a0c82109d857,PodSandboxId:6a5700facfbfc0947e103d7114bdc5e721e8b8c0655b7a9a22c1c8e7dbec6443,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728302137369099169,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e6d68f4398bdebd16c2654551dbce1a0754fbedd91ba4e1e9a9ed7cb57458,PodSandboxId:f498dfe1c9da7c83965d715c9ee1cce6204872f686af710abddcb6d4ca7c83b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728302137327928762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5499881b9a7180c26488452535c7e046dc20fd0f9f731e391ac7068a6c6f0d39,PodSandboxId:79a71698b535f3f4cfead8219422c4d9db0f18141c6ef8af7e29119705cae29d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728302133579969355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4178a6c4038f98ff6558bf467847c0d8e1cb20f19bc3d89db4eb328db6566c81,PodSandboxId:e3d24e3aa199e18f641e477dc448159ac5e82cb14d0735d2ff93801e20362623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728302133558770973,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecd2fc86fc14f9839ea531d8a496b2b394f3f9e3089fd73f67210bd3a4c3b5e,PodSandboxId:25e7c2c97e166c9edabfb1fbaeca324159eb4c1806df26386356086bd2762fd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728302133547616394,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82665f23d6e5401737fad860192067,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd556941-87a7-47a2-afc5-1b42ae623d61 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.742403843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84433cc4-0108-47d5-a21a-00be1f7bf1e3 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.742496007Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84433cc4-0108-47d5-a21a-00be1f7bf1e3 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.743845925Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04c577bb-d6d3-4189-9dd8-5d9e3118ae87 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.744647565Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302803744614904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04c577bb-d6d3-4189-9dd8-5d9e3118ae87 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.745472736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1404f82-2c08-4400-8ff4-7bb638bce414 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.745565115Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1404f82-2c08-4400-8ff4-7bb638bce414 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:06:43 functional-790363 crio[4846]: time="2024-10-07 12:06:43.745961742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:89f8fb2e372535ca6f1f7185e5511386ab5e1bab5f70ad0d52c06519e6aadb63,PodSandboxId:1e9cdb2c412a06df333e1ab43a58851cd733c94afcc9407933f6ed141776b9ef,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1728302343825596773,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-c5db448b4-srlwz,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d36cc72a-f7ca-4393-b599-82d95feaaa06,},Annotations:map[string]string{io.kube
rnetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b8b49ad030cff2a9431a545949d8ceef79a7eea304fd994aa22952524c0302a,PodSandboxId:60e4f8e51947e63a3eec7ebd3a92f11d6d82d785d491c3d70b111059aa1d438d,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1728302341634466255,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-695b96c756-5x8f4,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 5d581836-45b4-4fe4-bf4a-1a99bbb5c5e1,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37d6481ff3fb2759693be28c18e54fa70a286b4079665dc7b31db6a006499650,PodSandboxId:a620634aa4478dd5e98f8e4f28854798a7df809f998b0946e94dde0b1ee0520d,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1728302335940065096,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4ea3387-0a8d-43b1-8ed0-a5caf15f672b,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:772145244fb5b4ad69814782e6f61a663b885d904999512e589a1973e87c7ddb,PodSandboxId:5884a4903268b35ce628b627c63612151a18a20f86652a13e3a1e8486c2ce577,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302275663264088,Labels:map[string]string{io.kubernetes.container.name: echoserver,i
o.kubernetes.pod.name: hello-node-connect-67bdd5bbb4-nnv6b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fa888d7c-ba75-4424-b0f9-0b53ef6e15d2,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9cdda99c8ad24dc25e8fa6479550da461cb07d2333b51693cee96208f660919,PodSandboxId:93b684576603d362a3319e2204151f37a933976c06f4c4feb2f1054e6a889299,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1728302273260854167,Labels:map[string]string{io.kubernetes.container.na
me: echoserver,io.kubernetes.pod.name: hello-node-6b9f76b5c7-rzmtr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2dd3c63a-e3a9-48e4-b35c-6eeb69b38295,},Annotations:map[string]string{io.kubernetes.container.hash: aa985672,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da497bf8743c332e5a6a9396e479580d1128396bc850f7d152b128d045b0142,PodSandboxId:f0d8d612a1ec8424515c96165c2e98153ccc57c0adbd899ad439f28c5800c251,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302178907479114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernet
es.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b52d08247d6ff1b34e5e4b288f37fc9a19c77ab57506233ff65db5cbed9a5f3e,PodSandboxId:fd4b0b771c475c86aa1dea63c83c954d5dc6e0b4fdd025135f4543b142fbe94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302178919871816,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb598880eecba2c5c8c2f41f8884e222d259ef0d90575dc6ec4d33499f7460af,PodSandboxId:4ce7a93b11806115eba21a52f2993355e8b9c7e288de221d63bcc82283a5ff64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342
c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302178899137904,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0d00430bfbcf707370f01b4a7f3ba74a3c2170a4dcd447a14fb6f30290cbdf4,PodSandboxId:9df509c87de947d3243dc27626ec5d38c1ffdde60cd2e55c75cae7ed14a1a3a8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af275
7a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302175275035728,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2f4e1143ad4f52e45d83a37fce32014,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae4874f237059e817484ace6d656787502853e81d7141e1d514cd9aa5df71f60,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,
State:CONTAINER_RUNNING,CreatedAt:1728302175113240054,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d3c1943121b8063348f009d186760a53a3335eef313487c4478127e65ea93d,PodSandboxId:c41bc7ac169396111a662bb988b76a5feb80d693f7b939f04b5255968e0cd433,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt
:1728302175065152683,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0197a3aeefefa450f9b715eeb4b74aaaaa77717c18abf83037f26abb62501eb,PodSandboxId:8fb748069241f9a61920b1ab214e10d285146b6f27901b703ff16f72419410bb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172
8302175083894134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ef19bca5cdcf4cd5760d287d58f51774c733dd7de17033fb8baea58d8d5fcf,PodSandboxId:4adb904f233af616d5b87e540bd3ab11809dc788052203894a7f629507c59c68,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:172830217
2027993068,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c38effeb0c36a4625ce02590486a8719,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcd7587c08707642845da1f9da47461e4c097627d532cad22ae7387d3ba9fe03,PodSandboxId:9fdf58736a072898020a400c69917a938b2b456503eb4cfcaa5ad3e703ffcb08,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728302137385436849,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2cmgd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ea17897-c4f2-446b-8d1f-0ec5d38d0e4b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19f8cf9e9c8abae44283271b2927935dd8fdc1406459daeecc22a0c82109d857,PodSandboxId:6a5700facfbfc0947e103d7114bdc5e721e8b8c0655b7a9a22c1c8e7dbec6443,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{
},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728302137369099169,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tg2xd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be36fd-489c-4736-ba11-30583fcd0e1c,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:123e6d68f4398bdebd16c2654551dbce1a0754fbedd91ba4e1e9a9ed7cb57458,PodSandboxId:f498dfe1c9da7c83965d715c9ee1cce6204872f686af710abddcb6d4ca7c83b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728302137327928762,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d337753-0806-4e51-8df7-1d6a0ef08ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5499881b9a7180c26488452535c7e046dc20fd0f9f731e391ac7068a6c6f0d39,PodSandboxId:79a71698b535f3f4cfead8219422c4d9db0f18141c6ef8af7e29119705cae29d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728302133579969355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aae0604b68928d4dc4e5f2972bb9cee5,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4178a6c4038f98ff6558bf467847c0d8e1cb20f19bc3d89db4eb328db6566c81,PodSandboxId:e3d24e3aa199e18f641e477dc448159ac5e82cb14d0735d2ff93801e20362623,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef
:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728302133558770973,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a302002d7348773b6fa2080b0fa8ca7,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ecd2fc86fc14f9839ea531d8a496b2b394f3f9e3089fd73f67210bd3a4c3b5e,PodSandboxId:25e7c2c97e166c9edabfb1fbaeca324159eb4c1806df26386356086bd2762fd8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728302133547616394,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-790363,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b82665f23d6e5401737fad860192067,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1404f82-2c08-4400-8ff4-7bb638bce414 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	89f8fb2e37253       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   7 minutes ago       Running             dashboard-metrics-scraper   0                   1e9cdb2c412a0       dashboard-metrics-scraper-c5db448b4-srlwz
	4b8b49ad030cf       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         7 minutes ago       Running             kubernetes-dashboard        0                   60e4f8e51947e       kubernetes-dashboard-695b96c756-5x8f4
	37d6481ff3fb2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              7 minutes ago       Exited              mount-munger                0                   a620634aa4478       busybox-mount
	772145244fb5b       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 8 minutes ago       Running             echoserver                  0                   5884a4903268b       hello-node-connect-67bdd5bbb4-nnv6b
	e9cdda99c8ad2       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               8 minutes ago       Running             echoserver                  0                   93b684576603d       hello-node-6b9f76b5c7-rzmtr
	b52d08247d6ff       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     3                   fd4b0b771c475       coredns-7c65d6cfc9-2cmgd
	2da497bf8743c       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 10 minutes ago      Running             kube-proxy                  3                   f0d8d612a1ec8       kube-proxy-tg2xd
	fb598880eecba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   4ce7a93b11806       storage-provisioner
	a0d00430bfbcf       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 10 minutes ago      Running             kube-apiserver              0                   9df509c87de94       kube-apiserver-functional-790363
	ae4874f237059       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 10 minutes ago      Running             etcd                        4                   4adb904f233af       etcd-functional-790363
	d0197a3aeefef       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 10 minutes ago      Running             kube-controller-manager     3                   8fb748069241f       kube-controller-manager-functional-790363
	f8d3c1943121b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 10 minutes ago      Running             kube-scheduler              3                   c41bc7ac16939       kube-scheduler-functional-790363
	c9ef19bca5cdc       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 10 minutes ago      Exited              etcd                        3                   4adb904f233af       etcd-functional-790363
	fcd7587c08707       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Exited              coredns                     2                   9fdf58736a072       coredns-7c65d6cfc9-2cmgd
	19f8cf9e9c8ab       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 11 minutes ago      Exited              kube-proxy                  2                   6a5700facfbfc       kube-proxy-tg2xd
	123e6d68f4398       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   f498dfe1c9da7       storage-provisioner
	5499881b9a718       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 11 minutes ago      Exited              kube-scheduler              2                   79a71698b535f       kube-scheduler-functional-790363
	4178a6c4038f9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 11 minutes ago      Exited              kube-controller-manager     2                   e3d24e3aa199e       kube-controller-manager-functional-790363
	8ecd2fc86fc14       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 11 minutes ago      Exited              kube-apiserver              2                   25e7c2c97e166       kube-apiserver-functional-790363
	
	
	==> coredns [b52d08247d6ff1b34e5e4b288f37fc9a19c77ab57506233ff65db5cbed9a5f3e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49893 - 19346 "HINFO IN 5999558149317467541.5755056667726841505. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029745842s
	
	
	==> coredns [fcd7587c08707642845da1f9da47461e4c097627d532cad22ae7387d3ba9fe03] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35672 - 53728 "HINFO IN 2885700761022076205.1762642202912000522. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030356774s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-790363
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-790363
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=functional-790363
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T11_54_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 11:54:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-790363
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:06:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:04:26 +0000   Mon, 07 Oct 2024 11:54:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:04:26 +0000   Mon, 07 Oct 2024 11:54:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:04:26 +0000   Mon, 07 Oct 2024 11:54:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:04:26 +0000   Mon, 07 Oct 2024 11:54:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.166
	  Hostname:    functional-790363
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 930e7722ab134e61a0cfaa7c4b722ea5
	  System UUID:                930e7722-ab13-4e61-a0cf-aa7c4b722ea5
	  Boot ID:                    c12af0d0-5adc-4ca8-ac1c-3fdaa7c5a465
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-rzmtr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  default                     hello-node-connect-67bdd5bbb4-nnv6b          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6cdb49bbb-2hkb9                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-7c65d6cfc9-2cmgd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-functional-790363                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-790363             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-790363    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tg2xd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-790363             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-srlwz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-5x8f4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-790363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-790363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-790363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-790363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-790363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-790363 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeReady                12m                kubelet          Node functional-790363 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node functional-790363 event: Registered Node functional-790363 in Controller
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-790363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-790363 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-790363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-790363 event: Registered Node functional-790363 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-790363 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-790363 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-790363 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-790363 event: Registered Node functional-790363 in Controller
	
	
	==> dmesg <==
	[  +0.300678] systemd-fstab-generator[2403]: Ignoring "noauto" option for root device
	[  +0.724713] systemd-fstab-generator[2521]: Ignoring "noauto" option for root device
	[  +9.086142] kauditd_printk_skb: 207 callbacks suppressed
	[ +14.497657] systemd-fstab-generator[3470]: Ignoring "noauto" option for root device
	[  +4.614481] kauditd_printk_skb: 38 callbacks suppressed
	[ +14.717633] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +0.090469] kauditd_printk_skb: 9 callbacks suppressed
	[Oct 7 11:56] systemd-fstab-generator[4770]: Ignoring "noauto" option for root device
	[  +0.076580] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.060404] systemd-fstab-generator[4782]: Ignoring "noauto" option for root device
	[  +0.177016] systemd-fstab-generator[4796]: Ignoring "noauto" option for root device
	[  +0.137591] systemd-fstab-generator[4808]: Ignoring "noauto" option for root device
	[  +0.305950] systemd-fstab-generator[4836]: Ignoring "noauto" option for root device
	[  +0.812144] systemd-fstab-generator[4957]: Ignoring "noauto" option for root device
	[  +3.003030] systemd-fstab-generator[5478]: Ignoring "noauto" option for root device
	[  +0.766213] kauditd_printk_skb: 206 callbacks suppressed
	[  +6.777564] kauditd_printk_skb: 33 callbacks suppressed
	[  +9.280115] systemd-fstab-generator[6001]: Ignoring "noauto" option for root device
	[  +6.710289] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.083940] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.371810] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 7 11:57] kauditd_printk_skb: 18 callbacks suppressed
	[Oct 7 11:58] kauditd_printk_skb: 4 callbacks suppressed
	[ +51.325062] kauditd_printk_skb: 32 callbacks suppressed
	[Oct 7 11:59] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [ae4874f237059e817484ace6d656787502853e81d7141e1d514cd9aa5df71f60] <==
	{"level":"info","ts":"2024-10-07T11:56:16.824860Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-07T11:56:16.824900Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c received MsgPreVoteResp from 21cab5ce19ce9e1c at term 3"}
	{"level":"info","ts":"2024-10-07T11:56:16.824938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became candidate at term 4"}
	{"level":"info","ts":"2024-10-07T11:56:16.824963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c received MsgVoteResp from 21cab5ce19ce9e1c at term 4"}
	{"level":"info","ts":"2024-10-07T11:56:16.825002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became leader at term 4"}
	{"level":"info","ts":"2024-10-07T11:56:16.825037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 21cab5ce19ce9e1c elected leader 21cab5ce19ce9e1c at term 4"}
	{"level":"info","ts":"2024-10-07T11:56:16.828020Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"21cab5ce19ce9e1c","local-member-attributes":"{Name:functional-790363 ClientURLs:[https://192.168.39.166:2379]}","request-path":"/0/members/21cab5ce19ce9e1c/attributes","cluster-id":"fb6a39d7926aa536","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T11:56:16.828272Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:56:16.829175Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:56:16.830515Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T11:56:16.841926Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T11:56:16.843744Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T11:56:16.843811Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T11:56:16.844573Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:56:16.845470Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.166:2379"}
	{"level":"info","ts":"2024-10-07T11:58:23.361705Z","caller":"traceutil/trace.go:171","msg":"trace[2003554841] transaction","detail":"{read_only:false; response_revision:938; number_of_response:1; }","duration":"218.918982ms","start":"2024-10-07T11:58:23.142743Z","end":"2024-10-07T11:58:23.361662Z","steps":["trace[2003554841] 'process raft request'  (duration: 218.550521ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:59:01.405135Z","caller":"traceutil/trace.go:171","msg":"trace[1589393935] transaction","detail":"{read_only:false; response_revision:981; number_of_response:1; }","duration":"100.534052ms","start":"2024-10-07T11:59:01.304569Z","end":"2024-10-07T11:59:01.405103Z","steps":["trace[1589393935] 'process raft request'  (duration: 100.435851ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T11:59:35.021042Z","caller":"traceutil/trace.go:171","msg":"trace[395090206] linearizableReadLoop","detail":"{readStateIndex:1139; appliedIndex:1138; }","duration":"262.747133ms","start":"2024-10-07T11:59:34.758279Z","end":"2024-10-07T11:59:35.021026Z","steps":["trace[395090206] 'read index received'  (duration: 260.709741ms)","trace[395090206] 'applied index is now lower than readState.Index'  (duration: 2.03432ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-07T11:59:35.021305Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.947794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:59:35.021424Z","caller":"traceutil/trace.go:171","msg":"trace[877576994] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1032; }","duration":"263.114657ms","start":"2024-10-07T11:59:34.758272Z","end":"2024-10-07T11:59:35.021387Z","steps":["trace[877576994] 'agreement among raft nodes before linearized reading'  (duration: 262.876412ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-07T11:59:35.021478Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.039484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-07T11:59:35.021517Z","caller":"traceutil/trace.go:171","msg":"trace[294892273] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1032; }","duration":"131.089958ms","start":"2024-10-07T11:59:34.890416Z","end":"2024-10-07T11:59:35.021506Z","steps":["trace[294892273] 'agreement among raft nodes before linearized reading'  (duration: 130.945159ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-07T12:06:16.866461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1133}
	{"level":"info","ts":"2024-10-07T12:06:16.896838Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1133,"took":"29.846324ms","hash":2073227937,"current-db-size-bytes":3899392,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":1703936,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-10-07T12:06:16.896985Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2073227937,"revision":1133,"compact-revision":-1}
	
	
	==> etcd [c9ef19bca5cdcf4cd5760d287d58f51774c733dd7de17033fb8baea58d8d5fcf] <==
	{"level":"info","ts":"2024-10-07T11:56:12.189884Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-07T11:56:12.196544Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"fb6a39d7926aa536","local-member-id":"21cab5ce19ce9e1c","commit-index":605}
	{"level":"info","ts":"2024-10-07T11:56:12.197330Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-07T11:56:12.197481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c became follower at term 3"}
	{"level":"info","ts":"2024-10-07T11:56:12.197680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 21cab5ce19ce9e1c [peers: [], term: 3, commit: 605, applied: 0, lastindex: 605, lastterm: 3]"}
	{"level":"warn","ts":"2024-10-07T11:56:12.202717Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-10-07T11:56:12.209830Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":562}
	{"level":"info","ts":"2024-10-07T11:56:12.215424Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-10-07T11:56:12.219734Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"21cab5ce19ce9e1c","timeout":"7s"}
	{"level":"info","ts":"2024-10-07T11:56:12.220059Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"21cab5ce19ce9e1c"}
	{"level":"info","ts":"2024-10-07T11:56:12.220095Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"21cab5ce19ce9e1c","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-07T11:56:12.220344Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-07T11:56:12.220520Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:56:12.220545Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:56:12.220553Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-07T11:56:12.220787Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"21cab5ce19ce9e1c switched to configuration voters=(2434958445348036124)"}
	{"level":"info","ts":"2024-10-07T11:56:12.220831Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fb6a39d7926aa536","local-member-id":"21cab5ce19ce9e1c","added-peer-id":"21cab5ce19ce9e1c","added-peer-peer-urls":["https://192.168.39.166:2380"]}
	{"level":"info","ts":"2024-10-07T11:56:12.220899Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fb6a39d7926aa536","local-member-id":"21cab5ce19ce9e1c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:56:12.220920Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T11:56:12.221443Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T11:56:12.224726Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-07T11:56:12.224908Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"21cab5ce19ce9e1c","initial-advertise-peer-urls":["https://192.168.39.166:2380"],"listen-peer-urls":["https://192.168.39.166:2380"],"advertise-client-urls":["https://192.168.39.166:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.166:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T11:56:12.224979Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T11:56:12.225074Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.166:2380"}
	{"level":"info","ts":"2024-10-07T11:56:12.225081Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.166:2380"}
	
	
	==> kernel <==
	 12:06:44 up 13 min,  0 users,  load average: 0.04, 0.25, 0.22
	Linux functional-790363 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8ecd2fc86fc14f9839ea531d8a496b2b394f3f9e3089fd73f67210bd3a4c3b5e] <==
	W1007 11:56:03.728038       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.728084       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.728121       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.728174       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729329       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729428       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729546       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729611       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729671       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729732       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729753       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729844       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.729940       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730011       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730089       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730251       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730322       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730377       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730428       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730488       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730548       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730587       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730656       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730717       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1007 11:56:03.730780       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a0d00430bfbcf707370f01b4a7f3ba74a3c2170a4dcd447a14fb6f30290cbdf4] <==
	I1007 11:56:18.255984       1 autoregister_controller.go:144] Starting autoregister controller
	I1007 11:56:18.255989       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1007 11:56:18.255994       1 cache.go:39] Caches are synced for autoregister controller
	I1007 11:56:18.257594       1 shared_informer.go:320] Caches are synced for configmaps
	E1007 11:56:18.269574       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1007 11:56:18.271284       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1007 11:56:18.277826       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 11:56:18.279311       1 policy_source.go:224] refreshing policies
	I1007 11:56:18.352650       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 11:56:19.149255       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1007 11:56:19.932919       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1007 11:56:19.946640       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1007 11:56:19.990522       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 11:56:20.024310       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1007 11:56:20.031327       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1007 11:56:21.810860       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 11:56:21.903960       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 11:56:37.891273       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.122.205"}
	I1007 11:56:42.392662       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.199.181"}
	I1007 11:56:42.446717       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1007 11:56:44.107288       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.185.121"}
	I1007 11:56:49.344812       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.72.112"}
	I1007 11:58:05.750042       1 controller.go:615] quota admission added evaluator for: namespaces
	I1007 11:58:06.078676       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.172.208"}
	I1007 11:58:06.102383       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.61.34"}
	
	
	==> kube-controller-manager [4178a6c4038f98ff6558bf467847c0d8e1cb20f19bc3d89db4eb328db6566c81] <==
	I1007 11:55:39.815391       1 shared_informer.go:320] Caches are synced for PVC protection
	I1007 11:55:39.815431       1 shared_informer.go:320] Caches are synced for HPA
	I1007 11:55:39.821455       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1007 11:55:39.821541       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1007 11:55:39.822738       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1007 11:55:39.822794       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1007 11:55:39.828003       1 shared_informer.go:320] Caches are synced for node
	I1007 11:55:39.828051       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1007 11:55:39.828088       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1007 11:55:39.828093       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1007 11:55:39.828098       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1007 11:55:39.828159       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-790363"
	I1007 11:55:39.903128       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1007 11:55:39.917816       1 shared_informer.go:320] Caches are synced for endpoint
	I1007 11:55:39.946337       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1007 11:55:39.963964       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1007 11:55:39.964147       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1007 11:55:40.000522       1 shared_informer.go:320] Caches are synced for resource quota
	I1007 11:55:40.005091       1 shared_informer.go:320] Caches are synced for daemon sets
	I1007 11:55:40.014313       1 shared_informer.go:320] Caches are synced for stateful set
	I1007 11:55:40.014437       1 shared_informer.go:320] Caches are synced for crt configmap
	I1007 11:55:40.021690       1 shared_informer.go:320] Caches are synced for resource quota
	I1007 11:55:40.454154       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 11:55:40.515282       1 shared_informer.go:320] Caches are synced for garbage collector
	I1007 11:55:40.515379       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [d0197a3aeefefa450f9b715eeb4b74aaaaa77717c18abf83037f26abb62501eb] <==
	I1007 11:58:05.900663       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.338176ms"
	E1007 11:58:05.900706       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1007 11:58:05.942263       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="46.343076ms"
	I1007 11:58:05.972658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="57.289332ms"
	I1007 11:58:05.979523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="35.713716ms"
	I1007 11:58:05.981987       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="81.285µs"
	I1007 11:58:05.989654       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="34.149µs"
	I1007 11:58:05.998247       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="24.83347ms"
	I1007 11:58:05.998475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="114.014µs"
	I1007 11:58:06.042723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="38.352µs"
	I1007 11:58:20.747333       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-790363"
	I1007 11:59:01.812501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="18.571201ms"
	I1007 11:59:01.812608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="41.275µs"
	I1007 11:59:04.834961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="23.016016ms"
	I1007 11:59:04.835390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="120.562µs"
	I1007 11:59:09.664077       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="52.908µs"
	I1007 11:59:21.313657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-790363"
	I1007 11:59:22.669895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="100.567µs"
	I1007 12:00:47.671952       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="131.506µs"
	I1007 12:00:59.665119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="46.836µs"
	I1007 12:02:06.669317       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="196.559µs"
	I1007 12:02:19.669014       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="75.278µs"
	I1007 12:04:08.665860       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="183.309µs"
	I1007 12:04:20.666470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="62.71µs"
	I1007 12:04:26.547512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-790363"
	
	
	==> kube-proxy [19f8cf9e9c8abae44283271b2927935dd8fdc1406459daeecc22a0c82109d857] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 11:55:37.657074       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 11:55:37.683581       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.166"]
	E1007 11:55:37.683665       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:55:37.742077       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 11:55:37.742136       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 11:55:37.742160       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:55:37.745665       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:55:37.746020       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:55:37.746067       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:55:37.747429       1 config.go:199] "Starting service config controller"
	I1007 11:55:37.747730       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:55:37.747794       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:55:37.747813       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:55:37.748361       1 config.go:328] "Starting node config controller"
	I1007 11:55:37.749801       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:55:37.848805       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:55:37.848860       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:55:37.850372       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [2da497bf8743c332e5a6a9396e479580d1128396bc850f7d152b128d045b0142] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 11:56:19.246837       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 11:56:19.270413       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.166"]
	E1007 11:56:19.270605       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 11:56:19.344550       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 11:56:19.344669       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 11:56:19.344745       1 server_linux.go:169] "Using iptables Proxier"
	I1007 11:56:19.350093       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 11:56:19.350475       1 server.go:483] "Version info" version="v1.31.1"
	I1007 11:56:19.350889       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:56:19.352481       1 config.go:199] "Starting service config controller"
	I1007 11:56:19.352763       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 11:56:19.352909       1 config.go:105] "Starting endpoint slice config controller"
	I1007 11:56:19.353028       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 11:56:19.353632       1 config.go:328] "Starting node config controller"
	I1007 11:56:19.354376       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 11:56:19.452999       1 shared_informer.go:320] Caches are synced for service config
	I1007 11:56:19.453105       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 11:56:19.454733       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5499881b9a7180c26488452535c7e046dc20fd0f9f731e391ac7068a6c6f0d39] <==
	I1007 11:55:34.854471       1 serving.go:386] Generated self-signed cert in-memory
	W1007 11:55:36.442258       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1007 11:55:36.442630       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 11:55:36.442758       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 11:55:36.442784       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 11:55:36.513773       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1007 11:55:36.513864       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:55:36.518140       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1007 11:55:36.519465       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1007 11:55:36.519511       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 11:55:36.519530       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1007 11:55:36.620514       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 11:56:03.690711       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1007 11:56:03.690764       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1007 11:56:03.690922       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f8d3c1943121b8063348f009d186760a53a3335eef313487c4478127e65ea93d] <==
	I1007 11:56:16.241751       1 serving.go:386] Generated self-signed cert in-memory
	W1007 11:56:18.214360       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1007 11:56:18.214455       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 11:56:18.214466       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 11:56:18.214472       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 11:56:18.265312       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1007 11:56:18.265358       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 11:56:18.267756       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1007 11:56:18.267979       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1007 11:56:18.267998       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1007 11:56:18.268755       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 11:56:18.370861       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 12:05:44 functional-790363 kubelet[5485]: E1007 12:05:44.864920    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302744864524829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:05:44 functional-790363 kubelet[5485]: E1007 12:05:44.864962    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302744864524829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:05:46 functional-790363 kubelet[5485]: E1007 12:05:46.648426    5485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="25367271-f008-4500-8e41-4f290db932a2"
	Oct 07 12:05:54 functional-790363 kubelet[5485]: E1007 12:05:54.867117    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302754866760577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:05:54 functional-790363 kubelet[5485]: E1007 12:05:54.867175    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302754866760577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:05:57 functional-790363 kubelet[5485]: E1007 12:05:57.648871    5485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-2hkb9" podUID="9c457b77-d3eb-43ee-b7b7-74a8d7c21e04"
	Oct 07 12:06:00 functional-790363 kubelet[5485]: E1007 12:06:00.648905    5485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="25367271-f008-4500-8e41-4f290db932a2"
	Oct 07 12:06:04 functional-790363 kubelet[5485]: E1007 12:06:04.869661    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302764869169028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:06:04 functional-790363 kubelet[5485]: E1007 12:06:04.869791    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302764869169028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:06:11 functional-790363 kubelet[5485]: E1007 12:06:11.649445    5485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-2hkb9" podUID="9c457b77-d3eb-43ee-b7b7-74a8d7c21e04"
	Oct 07 12:06:13 functional-790363 kubelet[5485]: E1007 12:06:13.650965    5485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="25367271-f008-4500-8e41-4f290db932a2"
	Oct 07 12:06:14 functional-790363 kubelet[5485]: E1007 12:06:14.695258    5485 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:06:14 functional-790363 kubelet[5485]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:06:14 functional-790363 kubelet[5485]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:06:14 functional-790363 kubelet[5485]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:06:14 functional-790363 kubelet[5485]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:06:14 functional-790363 kubelet[5485]: E1007 12:06:14.872266    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302774871713218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:06:14 functional-790363 kubelet[5485]: E1007 12:06:14.872313    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302774871713218,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:06:24 functional-790363 kubelet[5485]: E1007 12:06:24.649837    5485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-2hkb9" podUID="9c457b77-d3eb-43ee-b7b7-74a8d7c21e04"
	Oct 07 12:06:24 functional-790363 kubelet[5485]: E1007 12:06:24.874448    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302784873980305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:06:24 functional-790363 kubelet[5485]: E1007 12:06:24.874492    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302784873980305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:06:25 functional-790363 kubelet[5485]: E1007 12:06:25.648832    5485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="25367271-f008-4500-8e41-4f290db932a2"
	Oct 07 12:06:34 functional-790363 kubelet[5485]: E1007 12:06:34.877597    5485 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302794876859089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:06:34 functional-790363 kubelet[5485]: E1007 12:06:34.877702    5485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728302794876859089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:232798,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:06:40 functional-790363 kubelet[5485]: E1007 12:06:40.649692    5485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="25367271-f008-4500-8e41-4f290db932a2"
	
	
	==> kubernetes-dashboard [4b8b49ad030cff2a9431a545949d8ceef79a7eea304fd994aa22952524c0302a] <==
	2024/10/07 11:59:01 Starting overwatch
	2024/10/07 11:59:01 Using namespace: kubernetes-dashboard
	2024/10/07 11:59:01 Using in-cluster config to connect to apiserver
	2024/10/07 11:59:01 Using secret token for csrf signing
	2024/10/07 11:59:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/07 11:59:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/07 11:59:01 Successful initial request to the apiserver, version: v1.31.1
	2024/10/07 11:59:01 Generating JWE encryption key
	2024/10/07 11:59:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/07 11:59:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/07 11:59:01 Initializing JWE encryption key from synchronized object
	2024/10/07 11:59:01 Creating in-cluster Sidecar client
	2024/10/07 11:59:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 11:59:01 Serving insecurely on HTTP port: 9090
	2024/10/07 11:59:31 Successful request to sidecar
	
	
	==> storage-provisioner [123e6d68f4398bdebd16c2654551dbce1a0754fbedd91ba4e1e9a9ed7cb57458] <==
	I1007 11:55:37.519097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:55:37.559730       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:55:37.560100       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:55:55.000816       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:55:55.001247       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8efe579f-3215-4c7a-8c92-097839d8037b", APIVersion:"v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-790363_b8ea66ad-baa3-4ee4-83a2-ee3ef6cc2645 became leader
	I1007 11:55:55.001713       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-790363_b8ea66ad-baa3-4ee4-83a2-ee3ef6cc2645!
	I1007 11:55:55.102925       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-790363_b8ea66ad-baa3-4ee4-83a2-ee3ef6cc2645!
	
	
	==> storage-provisioner [fb598880eecba2c5c8c2f41f8884e222d259ef0d90575dc6ec4d33499f7460af] <==
	I1007 11:56:19.057273       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 11:56:19.104009       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 11:56:19.104119       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 11:56:36.507628       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 11:56:36.507836       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-790363_8378ba94-1c35-463a-a3c7-623f90fbb37a!
	I1007 11:56:36.508162       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8efe579f-3215-4c7a-8c92-097839d8037b", APIVersion:"v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-790363_8378ba94-1c35-463a-a3c7-623f90fbb37a became leader
	I1007 11:56:36.608813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-790363_8378ba94-1c35-463a-a3c7-623f90fbb37a!
	I1007 11:56:48.920700       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1007 11:56:48.921347       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"8e361877-6451-471a-9a14-0c0a728fdf66", APIVersion:"v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1007 11:56:48.920951       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    4a316bda-2edb-423a-bc91-28ef99de7d57 378 0 2024-10-07 11:54:23 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-07 11:54:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-8e361877-6451-471a-9a14-0c0a728fdf66 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  8e361877-6451-471a-9a14-0c0a728fdf66 731 0 2024-10-07 11:56:48 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-07 11:56:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-07 11:56:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1007 11:56:48.940275       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-8e361877-6451-471a-9a14-0c0a728fdf66" provisioned
	I1007 11:56:48.940388       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1007 11:56:48.940405       1 volume_store.go:212] Trying to save persistentvolume "pvc-8e361877-6451-471a-9a14-0c0a728fdf66"
	I1007 11:56:48.982259       1 volume_store.go:219] persistentvolume "pvc-8e361877-6451-471a-9a14-0c0a728fdf66" saved
	I1007 11:56:48.982548       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"8e361877-6451-471a-9a14-0c0a728fdf66", APIVersion:"v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-8e361877-6451-471a-9a14-0c0a728fdf66
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-790363 -n functional-790363
helpers_test.go:261: (dbg) Run:  kubectl --context functional-790363 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-2hkb9 sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-790363 describe pod busybox-mount mysql-6cdb49bbb-2hkb9 sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-790363 describe pod busybox-mount mysql-6cdb49bbb-2hkb9 sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-790363/192.168.39.166
	Start Time:       Mon, 07 Oct 2024 11:58:04 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://37d6481ff3fb2759693be28c18e54fa70a286b4079665dc7b31db6a006499650
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 07 Oct 2024 11:58:56 +0000
	      Finished:     Mon, 07 Oct 2024 11:58:56 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-snrw6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-snrw6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  8m40s  default-scheduler  Successfully assigned default/busybox-mount to functional-790363
	  Normal  Pulling    8m41s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     7m50s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.184s (50.999s including waiting). Image size: 4631262 bytes.
	  Normal  Created    7m49s  kubelet            Created container mount-munger
	  Normal  Started    7m49s  kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-2hkb9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-790363/192.168.39.166
	Start Time:       Mon, 07 Oct 2024 11:56:42 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h9t89 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-h9t89:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-2hkb9 to functional-790363
	  Warning  Failed     6m10s (x2 over 7m51s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m21s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m50s (x2 over 9m32s)  kubelet            Failed to pull image "docker.io/mysql:5.7": fetching target platform image selected from image index: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m50s (x4 over 9m32s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4m26s (x7 over 9m31s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m26s (x7 over 9m31s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-790363/192.168.39.166
	Start Time:       Mon, 07 Oct 2024 11:56:50 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qhzl4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-qhzl4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m54s                  default-scheduler  Successfully assigned default/sp-pod to functional-790363
	  Warning  Failed     6m41s (x2 over 8m21s)  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m59s (x4 over 9m54s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     4m19s (x4 over 8m21s)  kubelet            Error: ErrImagePull
	  Warning  Failed     4m19s (x2 over 5m39s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    3m40s (x7 over 8m21s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     3m40s (x7 over 8m21s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (603.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 node stop m02 -v=7 --alsologtostderr
E1007 12:11:24.449143  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:42.462632  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:42.469091  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:42.480533  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:42.502055  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:42.543589  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:42.625093  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:42.786494  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:43.107996  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:43.750021  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:45.031471  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:47.593786  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:11:52.715185  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:12:02.956887  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:12:23.438469  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:13:04.399880  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-628553 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.50082501s)

                                                
                                                
-- stdout --
	* Stopping node "ha-628553-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:11:23.469215  405621 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:11:23.469479  405621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:11:23.469491  405621 out.go:358] Setting ErrFile to fd 2...
	I1007 12:11:23.469498  405621 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:11:23.469779  405621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:11:23.470174  405621 mustload.go:65] Loading cluster: ha-628553
	I1007 12:11:23.470728  405621 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:11:23.470755  405621 stop.go:39] StopHost: ha-628553-m02
	I1007 12:11:23.471332  405621 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:11:23.471399  405621 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:11:23.489110  405621 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43819
	I1007 12:11:23.489641  405621 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:11:23.490408  405621 main.go:141] libmachine: Using API Version  1
	I1007 12:11:23.490439  405621 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:11:23.490786  405621 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:11:23.492779  405621 out.go:177] * Stopping node "ha-628553-m02"  ...
	I1007 12:11:23.494141  405621 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:11:23.494190  405621 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:11:23.494448  405621 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:11:23.494474  405621 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:11:23.497276  405621 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:11:23.497752  405621 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:11:23.497772  405621 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:11:23.497940  405621 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:11:23.498121  405621 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:11:23.498300  405621 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:11:23.498500  405621 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:11:23.584332  405621 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 12:11:23.639257  405621 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 12:11:23.695026  405621 main.go:141] libmachine: Stopping "ha-628553-m02"...
	I1007 12:11:23.695081  405621 main.go:141] libmachine: (ha-628553-m02) Calling .GetState
	I1007 12:11:23.697096  405621 main.go:141] libmachine: (ha-628553-m02) Calling .Stop
	I1007 12:11:23.702103  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 0/120
	I1007 12:11:24.703679  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 1/120
	I1007 12:11:25.705442  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 2/120
	I1007 12:11:26.707418  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 3/120
	I1007 12:11:27.709545  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 4/120
	I1007 12:11:28.711922  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 5/120
	I1007 12:11:29.713564  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 6/120
	I1007 12:11:30.715837  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 7/120
	I1007 12:11:31.718087  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 8/120
	I1007 12:11:32.719718  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 9/120
	I1007 12:11:33.721683  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 10/120
	I1007 12:11:34.723228  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 11/120
	I1007 12:11:35.725632  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 12/120
	I1007 12:11:36.727227  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 13/120
	I1007 12:11:37.728633  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 14/120
	I1007 12:11:38.730855  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 15/120
	I1007 12:11:39.732410  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 16/120
	I1007 12:11:40.733875  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 17/120
	I1007 12:11:41.735592  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 18/120
	I1007 12:11:42.737157  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 19/120
	I1007 12:11:43.738684  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 20/120
	I1007 12:11:44.740929  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 21/120
	I1007 12:11:45.742428  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 22/120
	I1007 12:11:46.744196  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 23/120
	I1007 12:11:47.745815  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 24/120
	I1007 12:11:48.748029  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 25/120
	I1007 12:11:49.749608  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 26/120
	I1007 12:11:50.751031  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 27/120
	I1007 12:11:51.752898  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 28/120
	I1007 12:11:52.754542  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 29/120
	I1007 12:11:53.756714  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 30/120
	I1007 12:11:54.758259  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 31/120
	I1007 12:11:55.759871  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 32/120
	I1007 12:11:56.761990  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 33/120
	I1007 12:11:57.763404  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 34/120
	I1007 12:11:58.765322  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 35/120
	I1007 12:11:59.767270  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 36/120
	I1007 12:12:00.769690  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 37/120
	I1007 12:12:01.771338  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 38/120
	I1007 12:12:02.773395  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 39/120
	I1007 12:12:03.775901  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 40/120
	I1007 12:12:04.777905  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 41/120
	I1007 12:12:05.779302  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 42/120
	I1007 12:12:06.781740  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 43/120
	I1007 12:12:07.783157  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 44/120
	I1007 12:12:08.785462  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 45/120
	I1007 12:12:09.786859  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 46/120
	I1007 12:12:10.788323  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 47/120
	I1007 12:12:11.789822  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 48/120
	I1007 12:12:12.791215  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 49/120
	I1007 12:12:13.793859  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 50/120
	I1007 12:12:14.795374  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 51/120
	I1007 12:12:15.796916  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 52/120
	I1007 12:12:16.798512  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 53/120
	I1007 12:12:17.799905  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 54/120
	I1007 12:12:18.801854  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 55/120
	I1007 12:12:19.803533  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 56/120
	I1007 12:12:20.805716  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 57/120
	I1007 12:12:21.807149  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 58/120
	I1007 12:12:22.809566  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 59/120
	I1007 12:12:23.811934  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 60/120
	I1007 12:12:24.813505  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 61/120
	I1007 12:12:25.814917  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 62/120
	I1007 12:12:26.816618  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 63/120
	I1007 12:12:27.818944  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 64/120
	I1007 12:12:28.820948  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 65/120
	I1007 12:12:29.822470  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 66/120
	I1007 12:12:30.823959  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 67/120
	I1007 12:12:31.825532  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 68/120
	I1007 12:12:32.827046  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 69/120
	I1007 12:12:33.829344  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 70/120
	I1007 12:12:34.830691  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 71/120
	I1007 12:12:35.832410  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 72/120
	I1007 12:12:36.833771  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 73/120
	I1007 12:12:37.835377  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 74/120
	I1007 12:12:38.837500  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 75/120
	I1007 12:12:39.839045  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 76/120
	I1007 12:12:40.840439  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 77/120
	I1007 12:12:41.842196  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 78/120
	I1007 12:12:42.843518  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 79/120
	I1007 12:12:43.845715  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 80/120
	I1007 12:12:44.847183  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 81/120
	I1007 12:12:45.849530  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 82/120
	I1007 12:12:46.851629  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 83/120
	I1007 12:12:47.853706  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 84/120
	I1007 12:12:48.855897  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 85/120
	I1007 12:12:49.858364  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 86/120
	I1007 12:12:50.860073  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 87/120
	I1007 12:12:51.861680  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 88/120
	I1007 12:12:52.863547  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 89/120
	I1007 12:12:53.865825  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 90/120
	I1007 12:12:54.867369  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 91/120
	I1007 12:12:55.869553  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 92/120
	I1007 12:12:56.871359  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 93/120
	I1007 12:12:57.873084  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 94/120
	I1007 12:12:58.875148  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 95/120
	I1007 12:12:59.877813  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 96/120
	I1007 12:13:00.879164  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 97/120
	I1007 12:13:01.881484  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 98/120
	I1007 12:13:02.883083  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 99/120
	I1007 12:13:03.885049  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 100/120
	I1007 12:13:04.887113  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 101/120
	I1007 12:13:05.888681  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 102/120
	I1007 12:13:06.890163  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 103/120
	I1007 12:13:07.891695  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 104/120
	I1007 12:13:08.893928  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 105/120
	I1007 12:13:09.895419  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 106/120
	I1007 12:13:10.897097  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 107/120
	I1007 12:13:11.898613  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 108/120
	I1007 12:13:12.900360  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 109/120
	I1007 12:13:13.902673  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 110/120
	I1007 12:13:14.904501  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 111/120
	I1007 12:13:15.905970  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 112/120
	I1007 12:13:16.907517  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 113/120
	I1007 12:13:17.909197  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 114/120
	I1007 12:13:18.911641  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 115/120
	I1007 12:13:19.913635  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 116/120
	I1007 12:13:20.915366  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 117/120
	I1007 12:13:21.917505  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 118/120
	I1007 12:13:22.918900  405621 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 119/120
	I1007 12:13:23.919982  405621 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 12:13:23.920167  405621 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-628553 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr: (18.663034432s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-628553 -n ha-628553
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-628553 logs -n 25: (1.482178892s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553:/home/docker/cp-test_ha-628553-m03_ha-628553.txt                       |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553 sudo cat                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553.txt                                 |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m04 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp testdata/cp-test.txt                                                | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553:/home/docker/cp-test_ha-628553-m04_ha-628553.txt                       |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553 sudo cat                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553.txt                                 |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03:/home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m03 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-628553 node stop m02 -v=7                                                     | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:06:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:06:46.248953  401591 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:06:46.249102  401591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:46.249113  401591 out.go:358] Setting ErrFile to fd 2...
	I1007 12:06:46.249117  401591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:46.249326  401591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:06:46.249966  401591 out.go:352] Setting JSON to false
	I1007 12:06:46.250938  401591 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6552,"bootTime":1728296254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:06:46.251073  401591 start.go:139] virtualization: kvm guest
	I1007 12:06:46.253469  401591 out.go:177] * [ha-628553] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:06:46.255142  401591 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:06:46.255180  401591 notify.go:220] Checking for updates...
	I1007 12:06:46.257412  401591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:06:46.258630  401591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:06:46.259784  401591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.261129  401591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:06:46.262379  401591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:06:46.263655  401591 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:06:46.300943  401591 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 12:06:46.302472  401591 start.go:297] selected driver: kvm2
	I1007 12:06:46.302493  401591 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:06:46.302513  401591 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:06:46.303566  401591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:06:46.303697  401591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:06:46.319358  401591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:06:46.319408  401591 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:06:46.319656  401591 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:06:46.319692  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:06:46.319741  401591 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 12:06:46.319766  401591 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:06:46.319825  401591 start.go:340] cluster config:
	{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:06:46.319936  401591 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:06:46.321805  401591 out.go:177] * Starting "ha-628553" primary control-plane node in "ha-628553" cluster
	I1007 12:06:46.323163  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:06:46.323208  401591 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:06:46.323219  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:06:46.323305  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:06:46.323316  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:06:46.323679  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:06:46.323704  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json: {Name:mk2a07965de558fa93dada604e58b87e56b9c04c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
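For orientation, the profile save above writes the cluster config from the dump to config.json under a write lock. A minimal sketch of that idea, assuming a hypothetical pared-down config struct and a plain O_EXCL lockfile rather than minikube's own lock package:

// Sketch only: persist a small profile config guarded by a lockfile.
package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// miniConfig is an illustrative subset of the full cluster config shown above.
type miniConfig struct {
	Name              string
	Driver            string
	Memory            int
	CPUs              int
	KubernetesVersion string
	ContainerRuntime  string
}

func saveConfig(dir string, cfg miniConfig) error {
	// Take a simple exclusive lock; a second writer would fail here.
	lock := filepath.Join(dir, "config.json.lock")
	f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
	if err != nil {
		return err
	}
	f.Close()
	defer os.Remove(lock)

	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0600)
}

func main() {
	dir, err := os.MkdirTemp("", "profile-ha-628553")
	if err != nil {
		panic(err)
	}
	if err := saveConfig(dir, miniConfig{
		Name: "ha-628553", Driver: "kvm2", Memory: 2200, CPUs: 2,
		KubernetesVersion: "v1.31.1", ContainerRuntime: "crio",
	}); err != nil {
		panic(err)
	}
}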
	I1007 12:06:46.323847  401591 start.go:360] acquireMachinesLock for ha-628553: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:06:46.323875  401591 start.go:364] duration metric: took 15.967µs to acquireMachinesLock for "ha-628553"
	I1007 12:06:46.323891  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:06:46.323965  401591 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 12:06:46.325764  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:06:46.325922  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:06:46.325971  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:06:46.341278  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I1007 12:06:46.341788  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:06:46.342304  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:06:46.342327  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:06:46.342728  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:06:46.342902  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:06:46.343093  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:06:46.343232  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:06:46.343262  401591 client.go:168] LocalClient.Create starting
	I1007 12:06:46.343300  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:06:46.343339  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:06:46.343361  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:06:46.343431  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:06:46.343449  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:06:46.343461  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:06:46.343477  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:06:46.343525  401591 main.go:141] libmachine: (ha-628553) Calling .PreCreateCheck
	I1007 12:06:46.343857  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:06:46.344200  401591 main.go:141] libmachine: Creating machine...
	I1007 12:06:46.344213  401591 main.go:141] libmachine: (ha-628553) Calling .Create
	I1007 12:06:46.344334  401591 main.go:141] libmachine: (ha-628553) Creating KVM machine...
	I1007 12:06:46.345527  401591 main.go:141] libmachine: (ha-628553) DBG | found existing default KVM network
	I1007 12:06:46.346242  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.346122  401614 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I1007 12:06:46.346346  401591 main.go:141] libmachine: (ha-628553) DBG | created network xml: 
	I1007 12:06:46.346370  401591 main.go:141] libmachine: (ha-628553) DBG | <network>
	I1007 12:06:46.346380  401591 main.go:141] libmachine: (ha-628553) DBG |   <name>mk-ha-628553</name>
	I1007 12:06:46.346391  401591 main.go:141] libmachine: (ha-628553) DBG |   <dns enable='no'/>
	I1007 12:06:46.346402  401591 main.go:141] libmachine: (ha-628553) DBG |   
	I1007 12:06:46.346407  401591 main.go:141] libmachine: (ha-628553) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 12:06:46.346415  401591 main.go:141] libmachine: (ha-628553) DBG |     <dhcp>
	I1007 12:06:46.346420  401591 main.go:141] libmachine: (ha-628553) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 12:06:46.346428  401591 main.go:141] libmachine: (ha-628553) DBG |     </dhcp>
	I1007 12:06:46.346439  401591 main.go:141] libmachine: (ha-628553) DBG |   </ip>
	I1007 12:06:46.346452  401591 main.go:141] libmachine: (ha-628553) DBG |   
	I1007 12:06:46.346459  401591 main.go:141] libmachine: (ha-628553) DBG | </network>
	I1007 12:06:46.346484  401591 main.go:141] libmachine: (ha-628553) DBG | 
	I1007 12:06:46.351921  401591 main.go:141] libmachine: (ha-628553) DBG | trying to create private KVM network mk-ha-628553 192.168.39.0/24...
	I1007 12:06:46.427414  401591 main.go:141] libmachine: (ha-628553) DBG | private KVM network mk-ha-628553 192.168.39.0/24 created
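The network XML printed above is what minikube hands to libvirt. Outside of minikube, the same private NAT network could be created by hand roughly as follows; a sketch only, with the temp-file handling illustrative and the name, address range, and qemu:///system URI taken from the log:

// Sketch: define and start the mk-ha-628553 network with virsh.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-ha-628553</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// Define the network, then start it, on the system connection
	// (KVMQemuURI:qemu:///system in the cluster config above).
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-ha-628553"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(fmt.Errorf("virsh %v: %w", args, err))
		}
	}
}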
	I1007 12:06:46.427467  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.427375  401614 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.427482  401591 main.go:141] libmachine: (ha-628553) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 ...
	I1007 12:06:46.427511  401591 main.go:141] libmachine: (ha-628553) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:06:46.427534  401591 main.go:141] libmachine: (ha-628553) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:06:46.734984  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.734782  401614 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa...
	I1007 12:06:46.872452  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.872289  401614 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/ha-628553.rawdisk...
	I1007 12:06:46.872482  401591 main.go:141] libmachine: (ha-628553) DBG | Writing magic tar header
	I1007 12:06:46.872494  401591 main.go:141] libmachine: (ha-628553) DBG | Writing SSH key tar header
	I1007 12:06:46.872500  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.872414  401614 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 ...
	I1007 12:06:46.872528  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553
	I1007 12:06:46.872550  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 (perms=drwx------)
	I1007 12:06:46.872558  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:06:46.872571  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:06:46.872585  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.872599  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:06:46.872642  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:06:46.872667  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:06:46.872679  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:06:46.872704  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:06:46.872718  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home
	I1007 12:06:46.872731  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:06:46.872746  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:06:46.872756  401591 main.go:141] libmachine: (ha-628553) Creating domain...
	I1007 12:06:46.872770  401591 main.go:141] libmachine: (ha-628553) DBG | Skipping /home - not owner
	I1007 12:06:46.873981  401591 main.go:141] libmachine: (ha-628553) define libvirt domain using xml: 
	I1007 12:06:46.874013  401591 main.go:141] libmachine: (ha-628553) <domain type='kvm'>
	I1007 12:06:46.874020  401591 main.go:141] libmachine: (ha-628553)   <name>ha-628553</name>
	I1007 12:06:46.874024  401591 main.go:141] libmachine: (ha-628553)   <memory unit='MiB'>2200</memory>
	I1007 12:06:46.874029  401591 main.go:141] libmachine: (ha-628553)   <vcpu>2</vcpu>
	I1007 12:06:46.874033  401591 main.go:141] libmachine: (ha-628553)   <features>
	I1007 12:06:46.874038  401591 main.go:141] libmachine: (ha-628553)     <acpi/>
	I1007 12:06:46.874041  401591 main.go:141] libmachine: (ha-628553)     <apic/>
	I1007 12:06:46.874076  401591 main.go:141] libmachine: (ha-628553)     <pae/>
	I1007 12:06:46.874106  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874128  401591 main.go:141] libmachine: (ha-628553)   </features>
	I1007 12:06:46.874148  401591 main.go:141] libmachine: (ha-628553)   <cpu mode='host-passthrough'>
	I1007 12:06:46.874160  401591 main.go:141] libmachine: (ha-628553)   
	I1007 12:06:46.874169  401591 main.go:141] libmachine: (ha-628553)   </cpu>
	I1007 12:06:46.874177  401591 main.go:141] libmachine: (ha-628553)   <os>
	I1007 12:06:46.874184  401591 main.go:141] libmachine: (ha-628553)     <type>hvm</type>
	I1007 12:06:46.874189  401591 main.go:141] libmachine: (ha-628553)     <boot dev='cdrom'/>
	I1007 12:06:46.874195  401591 main.go:141] libmachine: (ha-628553)     <boot dev='hd'/>
	I1007 12:06:46.874201  401591 main.go:141] libmachine: (ha-628553)     <bootmenu enable='no'/>
	I1007 12:06:46.874209  401591 main.go:141] libmachine: (ha-628553)   </os>
	I1007 12:06:46.874217  401591 main.go:141] libmachine: (ha-628553)   <devices>
	I1007 12:06:46.874227  401591 main.go:141] libmachine: (ha-628553)     <disk type='file' device='cdrom'>
	I1007 12:06:46.874240  401591 main.go:141] libmachine: (ha-628553)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/boot2docker.iso'/>
	I1007 12:06:46.874254  401591 main.go:141] libmachine: (ha-628553)       <target dev='hdc' bus='scsi'/>
	I1007 12:06:46.874286  401591 main.go:141] libmachine: (ha-628553)       <readonly/>
	I1007 12:06:46.874302  401591 main.go:141] libmachine: (ha-628553)     </disk>
	I1007 12:06:46.874308  401591 main.go:141] libmachine: (ha-628553)     <disk type='file' device='disk'>
	I1007 12:06:46.874314  401591 main.go:141] libmachine: (ha-628553)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:06:46.874328  401591 main.go:141] libmachine: (ha-628553)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/ha-628553.rawdisk'/>
	I1007 12:06:46.874335  401591 main.go:141] libmachine: (ha-628553)       <target dev='hda' bus='virtio'/>
	I1007 12:06:46.874340  401591 main.go:141] libmachine: (ha-628553)     </disk>
	I1007 12:06:46.874346  401591 main.go:141] libmachine: (ha-628553)     <interface type='network'>
	I1007 12:06:46.874352  401591 main.go:141] libmachine: (ha-628553)       <source network='mk-ha-628553'/>
	I1007 12:06:46.874358  401591 main.go:141] libmachine: (ha-628553)       <model type='virtio'/>
	I1007 12:06:46.874363  401591 main.go:141] libmachine: (ha-628553)     </interface>
	I1007 12:06:46.874369  401591 main.go:141] libmachine: (ha-628553)     <interface type='network'>
	I1007 12:06:46.874375  401591 main.go:141] libmachine: (ha-628553)       <source network='default'/>
	I1007 12:06:46.874381  401591 main.go:141] libmachine: (ha-628553)       <model type='virtio'/>
	I1007 12:06:46.874386  401591 main.go:141] libmachine: (ha-628553)     </interface>
	I1007 12:06:46.874395  401591 main.go:141] libmachine: (ha-628553)     <serial type='pty'>
	I1007 12:06:46.874400  401591 main.go:141] libmachine: (ha-628553)       <target port='0'/>
	I1007 12:06:46.874409  401591 main.go:141] libmachine: (ha-628553)     </serial>
	I1007 12:06:46.874429  401591 main.go:141] libmachine: (ha-628553)     <console type='pty'>
	I1007 12:06:46.874446  401591 main.go:141] libmachine: (ha-628553)       <target type='serial' port='0'/>
	I1007 12:06:46.874474  401591 main.go:141] libmachine: (ha-628553)     </console>
	I1007 12:06:46.874484  401591 main.go:141] libmachine: (ha-628553)     <rng model='virtio'>
	I1007 12:06:46.874505  401591 main.go:141] libmachine: (ha-628553)       <backend model='random'>/dev/random</backend>
	I1007 12:06:46.874515  401591 main.go:141] libmachine: (ha-628553)     </rng>
	I1007 12:06:46.874526  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874539  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874559  401591 main.go:141] libmachine: (ha-628553)   </devices>
	I1007 12:06:46.874569  401591 main.go:141] libmachine: (ha-628553) </domain>
	I1007 12:06:46.874620  401591 main.go:141] libmachine: (ha-628553) 
	I1007 12:06:46.879724  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:6a:a7:e1 in network default
	I1007 12:06:46.880361  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:46.880382  401591 main.go:141] libmachine: (ha-628553) Ensuring networks are active...
	I1007 12:06:46.881257  401591 main.go:141] libmachine: (ha-628553) Ensuring network default is active
	I1007 12:06:46.881675  401591 main.go:141] libmachine: (ha-628553) Ensuring network mk-ha-628553 is active
	I1007 12:06:46.882336  401591 main.go:141] libmachine: (ha-628553) Getting domain xml...
	I1007 12:06:46.883247  401591 main.go:141] libmachine: (ha-628553) Creating domain...
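Defining and booting the domain corresponds to a define-then-create pair against libvirt. A minimal sketch using the libvirt Go bindings, assuming the domain XML logged above has been saved to ha-628553.xml; the module path and error handling are simplified and this is not minikube's own code:

// Sketch: persist the domain definition, then boot it.
package main

import (
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	xml, err := os.ReadFile("ha-628553.xml") // domain XML as logged above
	if err != nil {
		panic(err)
	}

	// DefineXML persists the configuration; Create starts the guest.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		panic(err)
	}
}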
	I1007 12:06:48.123283  401591 main.go:141] libmachine: (ha-628553) Waiting to get IP...
	I1007 12:06:48.124056  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.124511  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.124563  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.124510  401614 retry.go:31] will retry after 252.804778ms: waiting for machine to come up
	I1007 12:06:48.379035  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.379469  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.379489  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.379438  401614 retry.go:31] will retry after 356.807953ms: waiting for machine to come up
	I1007 12:06:48.738267  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.738722  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.738745  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.738688  401614 retry.go:31] will retry after 447.95167ms: waiting for machine to come up
	I1007 12:06:49.188519  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:49.188950  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:49.189019  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:49.188950  401614 retry.go:31] will retry after 486.200273ms: waiting for machine to come up
	I1007 12:06:49.676646  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:49.677063  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:49.677096  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:49.677017  401614 retry.go:31] will retry after 751.80427ms: waiting for machine to come up
	I1007 12:06:50.430789  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:50.431237  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:50.431260  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:50.431198  401614 retry.go:31] will retry after 897.786106ms: waiting for machine to come up
	I1007 12:06:51.330467  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:51.330831  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:51.330901  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:51.330836  401614 retry.go:31] will retry after 793.545437ms: waiting for machine to come up
	I1007 12:06:52.125725  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:52.126243  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:52.126280  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:52.126156  401614 retry.go:31] will retry after 986.036634ms: waiting for machine to come up
	I1007 12:06:53.113559  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:53.113953  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:53.113997  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:53.113901  401614 retry.go:31] will retry after 1.340335374s: waiting for machine to come up
	I1007 12:06:54.456245  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:54.456708  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:54.456732  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:54.456674  401614 retry.go:31] will retry after 1.447575739s: waiting for machine to come up
	I1007 12:06:55.906303  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:55.906806  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:55.906840  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:55.906747  401614 retry.go:31] will retry after 2.291446715s: waiting for machine to come up
	I1007 12:06:58.200323  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:58.200867  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:58.200896  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:58.200813  401614 retry.go:31] will retry after 2.450660794s: waiting for machine to come up
	I1007 12:07:00.654450  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:00.655019  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:07:00.655050  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:07:00.654943  401614 retry.go:31] will retry after 4.454613315s: waiting for machine to come up
	I1007 12:07:05.114240  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:05.114649  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:07:05.114678  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:07:05.114610  401614 retry.go:31] will retry after 4.13354174s: waiting for machine to come up
	I1007 12:07:09.251818  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.252270  401591 main.go:141] libmachine: (ha-628553) Found IP for machine: 192.168.39.110
	I1007 12:07:09.252297  401591 main.go:141] libmachine: (ha-628553) Reserving static IP address...
	I1007 12:07:09.252306  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has current primary IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.252723  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find host DHCP lease matching {name: "ha-628553", mac: "52:54:00:7b:12:fd", ip: "192.168.39.110"} in network mk-ha-628553
	I1007 12:07:09.328075  401591 main.go:141] libmachine: (ha-628553) DBG | Getting to WaitForSSH function...
	I1007 12:07:09.328108  401591 main.go:141] libmachine: (ha-628553) Reserved static IP address: 192.168.39.110
	I1007 12:07:09.328119  401591 main.go:141] libmachine: (ha-628553) Waiting for SSH to be available...
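The "will retry after …" lines above come from a growing-delay retry loop around the IP lookup. A rough, illustrative equivalent follows; the exact backoff policy in retry.go is not reproduced here:

// Sketch: poll for an IP with increasing, jittered delays until a deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// waitForIP polls lookup until it returns an address or the deadline passes.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Since(start) > deadline {
			return "", fmt.Errorf("timed out after %s: %w", deadline, err)
		}
		// Grow the delay with a little jitter, as the logged intervals suggest.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errNoIP // simulate the lease not existing yet
		}
		return "192.168.39.110", nil
	}, time.Minute)
	fmt.Println(ip, err)
}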
	I1007 12:07:09.330775  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.331429  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.331468  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.331645  401591 main.go:141] libmachine: (ha-628553) DBG | Using SSH client type: external
	I1007 12:07:09.331670  401591 main.go:141] libmachine: (ha-628553) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa (-rw-------)
	I1007 12:07:09.331710  401591 main.go:141] libmachine: (ha-628553) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:07:09.331724  401591 main.go:141] libmachine: (ha-628553) DBG | About to run SSH command:
	I1007 12:07:09.331736  401591 main.go:141] libmachine: (ha-628553) DBG | exit 0
	I1007 12:07:09.455242  401591 main.go:141] libmachine: (ha-628553) DBG | SSH cmd err, output: <nil>: 
	I1007 12:07:09.455632  401591 main.go:141] libmachine: (ha-628553) KVM machine creation complete!
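The SSH availability probe is just "exit 0" run against the guest, as the external ssh invocation above shows. A minimal sketch of the same check using golang.org/x/crypto/ssh, with the address, user, and key path taken from the log; host key checking is skipped only because this is a disposable test VM:

// Sketch: return nil once the guest accepts an SSH command.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshReady(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0") // succeeds once the guest runs commands
}

func main() {
	err := sshReady("192.168.39.110:22", "docker",
		"/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa")
	fmt.Println("ssh ready:", err == nil, err)
}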
	I1007 12:07:09.455937  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:07:09.456561  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:09.456802  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:09.457023  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:07:09.457043  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:09.458370  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:07:09.458386  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:07:09.458404  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:07:09.458413  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.460807  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.461171  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.461207  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.461300  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.461468  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.461645  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.461780  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.461919  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.462158  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.462173  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:07:09.562645  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:09.562687  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:07:09.562725  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.565649  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.565971  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.566008  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.566176  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.566388  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.566561  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.566676  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.566830  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.567082  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.567099  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:07:09.667847  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:07:09.667941  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:07:09.667948  401591 main.go:141] libmachine: Provisioning with buildroot...
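Provisioner detection boils down to reading /etc/os-release and matching ID=buildroot. An illustrative parse of the output captured above (not minikube's detection code):

// Sketch: parse os-release key=value pairs and check the ID field.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

func main() {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	// ID=buildroot is what marks the host as a compatible provision target.
	fmt.Println("compatible host:", fields["ID"] == "buildroot", fields["PRETTY_NAME"])
}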
	I1007 12:07:09.667957  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.668229  401591 buildroot.go:166] provisioning hostname "ha-628553"
	I1007 12:07:09.668263  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.668471  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.671034  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.671389  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.671427  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.671579  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.671743  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.671923  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.672060  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.672217  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.672404  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.672417  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553 && echo "ha-628553" | sudo tee /etc/hostname
	I1007 12:07:09.786631  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553
	
	I1007 12:07:09.786665  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.789427  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.789744  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.789774  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.789989  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.790273  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.790426  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.790549  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.790707  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.790919  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.790942  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:07:09.900194  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:09.900232  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:07:09.900296  401591 buildroot.go:174] setting up certificates
	I1007 12:07:09.900321  401591 provision.go:84] configureAuth start
	I1007 12:07:09.900343  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.900684  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:09.903579  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.904022  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.904048  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.904222  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.906311  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.906630  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.906658  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.906830  401591 provision.go:143] copyHostCerts
	I1007 12:07:09.906874  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:09.906920  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:07:09.906937  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:09.907109  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:07:09.907203  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:09.907224  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:07:09.907232  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:09.907258  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:07:09.907319  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:09.907341  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:07:09.907348  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:09.907368  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:07:09.907427  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553 san=[127.0.0.1 192.168.39.110 ha-628553 localhost minikube]
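The server certificate step issues a cert signed by the local CA with the SANs listed above. A sketch of that issuance with the standard crypto/x509 package, assuming the CA key is a PKCS#1 RSA PEM and with error handling trimmed for brevity (real code should check every error):

// Sketch: issue server.pem/server-key.pem signed by ca.pem/ca-key.pem.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-628553"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// san=[127.0.0.1 192.168.39.110 ha-628553 localhost minikube] from the log
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
		DNSNames:    []string{"ha-628553", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
	_ = os.WriteFile("server.pem", certPEM, 0644)
	_ = os.WriteFile("server-key.pem", keyPEM, 0600)
}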
	I1007 12:07:09.982701  401591 provision.go:177] copyRemoteCerts
	I1007 12:07:09.982771  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:07:09.982796  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.985547  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.985859  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.985888  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.986044  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.986244  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.986399  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.986506  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.070065  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:07:10.070156  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:07:10.096714  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:07:10.096790  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:07:10.123505  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:07:10.123591  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:07:10.149487  401591 provision.go:87] duration metric: took 249.146606ms to configureAuth
	I1007 12:07:10.149524  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:07:10.149723  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:10.149836  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.152585  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.152880  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.152910  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.153069  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.153241  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.153400  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.153553  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.153691  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:10.153888  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:10.153903  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:07:10.373356  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:07:10.373398  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:07:10.373429  401591 main.go:141] libmachine: (ha-628553) Calling .GetURL
	I1007 12:07:10.374673  401591 main.go:141] libmachine: (ha-628553) DBG | Using libvirt version 6000000
	I1007 12:07:10.376989  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.377347  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.377371  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.377519  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:07:10.377531  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:07:10.377548  401591 client.go:171] duration metric: took 24.034266127s to LocalClient.Create
	I1007 12:07:10.377571  401591 start.go:167] duration metric: took 24.034341329s to libmachine.API.Create "ha-628553"
	I1007 12:07:10.377581  401591 start.go:293] postStartSetup for "ha-628553" (driver="kvm2")
	I1007 12:07:10.377593  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:07:10.377610  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.377871  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:07:10.377899  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.380000  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.380320  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.380343  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.380475  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.380648  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.380799  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.380960  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.461919  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:07:10.466913  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:07:10.466951  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:07:10.467055  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:07:10.467179  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:07:10.467195  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:07:10.467315  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:07:10.478269  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:10.503960  401591 start.go:296] duration metric: took 126.358927ms for postStartSetup
	I1007 12:07:10.504030  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:07:10.504699  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:10.507315  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.507612  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.507660  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.507956  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:10.508187  401591 start.go:128] duration metric: took 24.184210305s to createHost
	I1007 12:07:10.508226  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.510480  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.510789  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.510822  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.511033  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.511256  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.511415  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.511573  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.511733  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:10.511905  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:10.511924  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:07:10.611827  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302830.585700119
	
	I1007 12:07:10.611860  401591 fix.go:216] guest clock: 1728302830.585700119
	I1007 12:07:10.611870  401591 fix.go:229] Guest: 2024-10-07 12:07:10.585700119 +0000 UTC Remote: 2024-10-07 12:07:10.508202357 +0000 UTC m=+24.300236101 (delta=77.497762ms)
	I1007 12:07:10.611911  401591 fix.go:200] guest clock delta is within tolerance: 77.497762ms
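The guest clock check compares the guest's `date +%s.%N` output against the host-side timestamp and accepts small skews. A worked example with the values from the log; the tolerance constant here is an assumption, since the log only reports that the ~77ms delta passed:

// Sketch: compute the host/guest clock delta and compare to a tolerance.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1728302830.585700119" // output of `date +%s.%N` on the guest
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	// "Remote" timestamp from the fix.go line above.
	host := time.Date(2024, 10, 7, 12, 7, 10, 508202357, time.UTC)
	delta := guest.Sub(host)

	const tolerance = 2 * time.Second // assumed threshold for illustration
	fmt.Printf("delta=%v within tolerance: %v\n", delta,
		math.Abs(float64(delta)) < float64(tolerance))
}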
	I1007 12:07:10.611917  401591 start.go:83] releasing machines lock for "ha-628553", held for 24.288033555s
	I1007 12:07:10.611944  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.612216  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:10.614566  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.614868  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.614895  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.615083  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.615721  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.615950  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.616059  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:07:10.616101  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.616157  401591 ssh_runner.go:195] Run: cat /version.json
	I1007 12:07:10.616184  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.618780  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.618978  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619174  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.619193  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619348  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.619390  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619659  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.619672  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.619840  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.619847  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.620016  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.620024  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.620177  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.620181  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.718502  401591 ssh_runner.go:195] Run: systemctl --version
	I1007 12:07:10.724799  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:07:10.886272  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:07:10.893483  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:07:10.893578  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:07:10.909850  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:07:10.909880  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:07:10.909961  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:07:10.926247  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:07:10.941251  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:07:10.941339  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:07:10.955771  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:07:10.969831  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:07:11.084350  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:07:11.233191  401591 docker.go:233] disabling docker service ...
	I1007 12:07:11.233261  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:07:11.257607  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:07:11.272121  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:07:11.404315  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:07:11.544026  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:07:11.559395  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:07:11.580516  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:07:11.580580  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.592830  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:07:11.592905  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.604197  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.615375  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.626652  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:07:11.638161  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.649289  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.668010  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.679654  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:07:11.690371  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:07:11.690448  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:07:11.704718  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
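The CRI-O preparation the runner performs over SSH above boils down to the following shell sketch, condensed from the commands in this log (crictl endpoint, pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl, br_netfilter, IPv4 forwarding); $CONF is only shorthand for the config path used in the log, not minikube's code.

  #!/usr/bin/env bash
  # Condensed sketch of the CRI-O setup steps recorded above.
  set -euo pipefail
  CONF=/etc/crio/crio.conf.d/02-crio.conf

  # Point crictl at the CRI-O socket.
  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

  # Pin the pause image and use the cgroupfs driver, with conmon in the pod cgroup.
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
  sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

  # Ensure default_sysctls exists and allows binding privileged ports inside pods.
  sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
  sudo grep -q "^ *default_sysctls" "$CONF" || \
    sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
  sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"

  # Kernel prerequisites for pod networking.
  sudo modprobe br_netfilter
  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'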
	I1007 12:07:11.715762  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:11.825411  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:07:11.918378  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:07:11.918470  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:07:11.923527  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:07:11.923612  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:07:11.927764  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:07:11.977811  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:07:11.977922  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:07:12.007918  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:07:12.039043  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:07:12.040655  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:12.043258  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:12.043618  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:12.043660  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:12.043867  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:07:12.048464  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:07:12.062293  401591 kubeadm.go:883] updating cluster {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:07:12.062486  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:07:12.062597  401591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:07:12.097470  401591 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:07:12.097555  401591 ssh_runner.go:195] Run: which lz4
	I1007 12:07:12.101992  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 12:07:12.102107  401591 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:07:12.106769  401591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:07:12.106815  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:07:13.549777  401591 crio.go:462] duration metric: took 1.447693523s to copy over tarball
	I1007 12:07:13.549867  401591 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:07:15.620966  401591 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.071058726s)
	I1007 12:07:15.621003  401591 crio.go:469] duration metric: took 2.071194203s to extract the tarball
	I1007 12:07:15.621015  401591 ssh_runner.go:146] rm: /preloaded.tar.lz4
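The preload step above follows a simple pattern: check whether /preloaded.tar.lz4 already exists on the node, copy the cached tarball over SSH if not, unpack it into /var so CRI-O sees the images, then delete the tarball. A minimal on-node sketch of the unpack half, with paths taken from the log (the copy itself is minikube's SSH transfer):

  # Unpack the preloaded image tarball into /var, then clean up.
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm /preloaded.tar.lz4
  # The following "crictl images --output json" call in the log confirms the images are present.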
	I1007 12:07:15.659036  401591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:07:15.704438  401591 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:07:15.704468  401591 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:07:15.704477  401591 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.31.1 crio true true} ...
	I1007 12:07:15.704607  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:07:15.704694  401591 ssh_runner.go:195] Run: crio config
	I1007 12:07:15.754734  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:07:15.754757  401591 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:07:15.754770  401591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:07:15.754796  401591 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-628553 NodeName:ha-628553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:07:15.754985  401591 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-628553"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
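This generated config is what the runner later writes to /var/tmp/minikube/kubeadm.yaml and feeds to kubeadm; the full invocation appears further down in this log, roughly:

  # Condensed from the "kubeadm init" command recorded later in this log
  # (the --ignore-preflight-errors list is abbreviated here).
  sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
    --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem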
	I1007 12:07:15.755023  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:07:15.755081  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:07:15.772386  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:07:15.772511  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:07:15.772569  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:07:15.783117  401591 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:07:15.783206  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:07:15.793430  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:07:15.811520  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:07:15.829402  401591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:07:15.846802  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 12:07:15.864215  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:07:15.868441  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
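Both host.minikube.internal and control-plane.minikube.internal are pinned with the same idempotent /etc/hosts rewrite seen above; generalized as a helper (names and IPs mirror the log, the function name is illustrative):

  # Drop any existing entry for the name, append the desired mapping,
  # and copy the rebuilt file back into place via a temp file.
  pin_host() {
    local ip=$1 name=$2
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts
  }
  pin_host 192.168.39.1   host.minikube.internal
  pin_host 192.168.39.254 control-plane.minikube.internal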
	I1007 12:07:15.881667  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:16.004989  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:07:16.023767  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.110
	I1007 12:07:16.023798  401591 certs.go:194] generating shared ca certs ...
	I1007 12:07:16.023817  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.023995  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:07:16.024043  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:07:16.024055  401591 certs.go:256] generating profile certs ...
	I1007 12:07:16.024128  401591 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:07:16.024144  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt with IP's: []
	I1007 12:07:16.480073  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt ...
	I1007 12:07:16.480107  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt: {Name:mkfb027cfd899ceeb19712c80d47ef46bbe4c190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.480291  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key ...
	I1007 12:07:16.480303  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key: {Name:mk472c4daf268a3e203f7108e0ee108260fa3747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.480379  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105
	I1007 12:07:16.480394  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I1007 12:07:16.560831  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 ...
	I1007 12:07:16.560865  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105: {Name:mkda56599207690099e4c299c085dc0644ef658a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.561026  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105 ...
	I1007 12:07:16.561038  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105: {Name:mk95b3f2a966eb67f31cfddf5b506b130fe9bd62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.561111  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:07:16.561219  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:07:16.561278  401591 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:07:16.561293  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt with IP's: []
	I1007 12:07:16.724627  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt ...
	I1007 12:07:16.724663  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt: {Name:mka4b333091a10b550ae6d13ed243d08adf6256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.724831  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key ...
	I1007 12:07:16.724852  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key: {Name:mk6b2bcdf33ba7c4b6b9286fdc19a9d76a966caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.724932  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:07:16.724949  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:07:16.724963  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:07:16.724977  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:07:16.724990  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:07:16.725004  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:07:16.725016  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:07:16.725028  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:07:16.725075  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:07:16.725108  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:07:16.725118  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:07:16.725153  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:07:16.725179  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:07:16.725216  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:07:16.725253  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:16.725329  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:07:16.725350  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:07:16.725362  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:16.726018  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:07:16.753427  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:07:16.781404  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:07:16.817294  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:07:16.847559  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:07:16.873440  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:07:16.900479  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:07:16.927096  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:07:16.955843  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:07:16.983339  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:07:17.013360  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:07:17.041294  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:07:17.061373  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:07:17.067955  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:07:17.081953  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.087146  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.087222  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.094009  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:07:17.108332  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:07:17.122877  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.128622  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.128708  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.136010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:07:17.150544  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:07:17.165028  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.170897  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.170982  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.177949  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
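The hashed file names above (51391683.0, 3ec20f2e.0, b5213941.0) come from OpenSSL's subject hash of each certificate; the pattern applied to every CA cert can be sketched as follows (the helper name is illustrative, the commands mirror the ln/openssl sequence in the log):

  # Link a CA cert into /etc/ssl/certs under its OpenSSL subject-hash name
  # so TLS clients can resolve it.
  trust_cert() {
    local name=$1   # e.g. minikubeCA.pem, already placed in /usr/share/ca-certificates
    sudo ln -fs "/usr/share/ca-certificates/$name" "/etc/ssl/certs/$name"
    local hash
    hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$name")
    sudo ln -fs "/etc/ssl/certs/$name" "/etc/ssl/certs/$hash.0"
  }
  trust_cert minikubeCA.pem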
	I1007 12:07:17.192554  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:07:17.197582  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:07:17.197639  401591 kubeadm.go:392] StartCluster: {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clus
terName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:07:17.197720  401591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:07:17.197783  401591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:07:17.244966  401591 cri.go:89] found id: ""
	I1007 12:07:17.245041  401591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:07:17.257993  401591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:07:17.270516  401591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:07:17.282873  401591 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:07:17.282897  401591 kubeadm.go:157] found existing configuration files:
	
	I1007 12:07:17.282953  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:07:17.293921  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:07:17.294014  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:07:17.305489  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:07:17.315800  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:07:17.315863  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:07:17.326391  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:07:17.336609  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:07:17.336691  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:07:17.347761  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:07:17.358288  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:07:17.358369  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
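The four grep/rm pairs above implement a stale-config sweep: any pre-existing kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. Written as a loop, the same check reads:

  # Remove kubeconfigs that don't point at the expected control-plane endpoint.
  ENDPOINT=https://control-plane.minikube.internal:8443
  for f in admin kubelet controller-manager scheduler; do
    sudo grep "$ENDPOINT" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
  done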
	I1007 12:07:17.369688  401591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 12:07:17.494169  401591 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:07:17.494284  401591 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:07:17.626708  401591 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:07:17.626813  401591 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:07:17.626906  401591 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:07:17.639261  401591 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:07:17.853154  401591 out.go:235]   - Generating certificates and keys ...
	I1007 12:07:17.853313  401591 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:07:17.853396  401591 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:07:17.853510  401591 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:07:17.853594  401591 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:07:18.070639  401591 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:07:18.133955  401591 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:07:18.493727  401591 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:07:18.493854  401591 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-628553 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1007 12:07:18.624521  401591 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:07:18.624725  401591 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-628553 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1007 12:07:18.772457  401591 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:07:19.133450  401591 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:07:19.279063  401591 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:07:19.279188  401591 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:07:19.348410  401591 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:07:19.574804  401591 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:07:19.645430  401591 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:07:19.894630  401591 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:07:20.065666  401591 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:07:20.066298  401591 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:07:20.071555  401591 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:07:20.073562  401591 out.go:235]   - Booting up control plane ...
	I1007 12:07:20.073670  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:07:20.073742  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:07:20.073803  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:07:20.089334  401591 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:07:20.096504  401591 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:07:20.096582  401591 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:07:20.238757  401591 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:07:20.238922  401591 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:07:21.247383  401591 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.007919898s
	I1007 12:07:21.247485  401591 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:07:26.913696  401591 kubeadm.go:310] [api-check] The API server is healthy after 5.671139192s
	I1007 12:07:26.932589  401591 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:07:26.948791  401591 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:07:27.494371  401591 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:07:27.494637  401591 kubeadm.go:310] [mark-control-plane] Marking the node ha-628553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:07:27.512639  401591 kubeadm.go:310] [bootstrap-token] Using token: jd5sg7.ynaw0s6f9h2yr29w
	I1007 12:07:27.514508  401591 out.go:235]   - Configuring RBAC rules ...
	I1007 12:07:27.514678  401591 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:07:27.527273  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:07:27.537651  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:07:27.542026  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:07:27.545879  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:07:27.550174  401591 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:07:27.568355  401591 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:07:27.807712  401591 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:07:28.321610  401591 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:07:28.321657  401591 kubeadm.go:310] 
	I1007 12:07:28.321720  401591 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:07:28.321728  401591 kubeadm.go:310] 
	I1007 12:07:28.321852  401591 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:07:28.321870  401591 kubeadm.go:310] 
	I1007 12:07:28.321904  401591 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:07:28.321987  401591 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:07:28.322064  401591 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:07:28.322074  401591 kubeadm.go:310] 
	I1007 12:07:28.322155  401591 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:07:28.322171  401591 kubeadm.go:310] 
	I1007 12:07:28.322225  401591 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:07:28.322234  401591 kubeadm.go:310] 
	I1007 12:07:28.322293  401591 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:07:28.322386  401591 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:07:28.322471  401591 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:07:28.322481  401591 kubeadm.go:310] 
	I1007 12:07:28.322608  401591 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:07:28.322677  401591 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:07:28.322684  401591 kubeadm.go:310] 
	I1007 12:07:28.322753  401591 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jd5sg7.ynaw0s6f9h2yr29w \
	I1007 12:07:28.322898  401591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 \
	I1007 12:07:28.322931  401591 kubeadm.go:310] 	--control-plane 
	I1007 12:07:28.322941  401591 kubeadm.go:310] 
	I1007 12:07:28.323057  401591 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:07:28.323067  401591 kubeadm.go:310] 
	I1007 12:07:28.323165  401591 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jd5sg7.ynaw0s6f9h2yr29w \
	I1007 12:07:28.323318  401591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 
	I1007 12:07:28.324193  401591 kubeadm.go:310] W1007 12:07:17.473376     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:07:28.324456  401591 kubeadm.go:310] W1007 12:07:17.474417     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:07:28.324568  401591 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 12:07:28.324604  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:07:28.324616  401591 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:07:28.326463  401591 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 12:07:28.327680  401591 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 12:07:28.333563  401591 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 12:07:28.333587  401591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 12:07:28.357058  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 12:07:28.763710  401591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:07:28.763800  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:28.763837  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553 minikube.k8s.io/updated_at=2024_10_07T12_07_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=true
	I1007 12:07:28.789823  401591 ops.go:34] apiserver oom_adj: -16
	I1007 12:07:28.939139  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:29.440288  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:29.939479  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:30.440099  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:30.940243  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:31.439830  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:31.939544  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:32.439274  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:32.691661  401591 kubeadm.go:1113] duration metric: took 3.927936335s to wait for elevateKubeSystemPrivileges
	I1007 12:07:32.691702  401591 kubeadm.go:394] duration metric: took 15.494065691s to StartCluster
	I1007 12:07:32.691720  401591 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:32.691805  401591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:07:32.694409  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:32.695052  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:07:32.695056  401591 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:07:32.695093  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:07:32.695116  401591 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:07:32.695224  401591 addons.go:69] Setting storage-provisioner=true in profile "ha-628553"
	I1007 12:07:32.695233  401591 addons.go:69] Setting default-storageclass=true in profile "ha-628553"
	I1007 12:07:32.695246  401591 addons.go:234] Setting addon storage-provisioner=true in "ha-628553"
	I1007 12:07:32.695276  401591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-628553"
	I1007 12:07:32.695321  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:32.695278  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:07:32.695828  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.695856  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.695880  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.695904  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.713283  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I1007 12:07:32.713330  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I1007 12:07:32.713795  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.713821  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.714372  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.714404  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.714470  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.714495  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.714860  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.714918  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.715087  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.715613  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.715671  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.717649  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:07:32.717950  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:07:32.718459  401591 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:07:32.718801  401591 addons.go:234] Setting addon default-storageclass=true in "ha-628553"
	I1007 12:07:32.718846  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:07:32.719253  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.719305  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.733464  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45313
	I1007 12:07:32.734011  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.734570  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.734597  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.734946  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.735147  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.736496  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I1007 12:07:32.736815  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:32.737247  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.737699  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.737724  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.738090  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.738558  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.738606  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.739129  401591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:07:32.740633  401591 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:07:32.740659  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:07:32.740683  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:32.744392  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.744885  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:32.744914  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.745085  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:32.745311  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:32.745493  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:32.745635  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:32.755450  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I1007 12:07:32.756180  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.756775  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.756839  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.757215  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.757439  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.759112  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:32.759361  401591 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:07:32.759380  401591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:07:32.759399  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:32.761925  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.762241  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:32.762266  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.762381  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:32.762573  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:32.762681  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:32.762803  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:32.893511  401591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:07:32.927665  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 12:07:32.930086  401591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:07:33.749725  401591 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1007 12:07:33.749834  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.749857  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750070  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750085  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750150  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.750183  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750217  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750228  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750239  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750364  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750400  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750412  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750420  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750560  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.750625  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750637  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750639  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750662  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750758  401591 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:07:33.750779  401591 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:07:33.750910  401591 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 12:07:33.750920  401591 round_trippers.go:469] Request Headers:
	I1007 12:07:33.750933  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:07:33.750938  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:07:33.762601  401591 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:07:33.763351  401591 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 12:07:33.763370  401591 round_trippers.go:469] Request Headers:
	I1007 12:07:33.763378  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:07:33.763383  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:07:33.763386  401591 round_trippers.go:473]     Content-Type: application/json
	I1007 12:07:33.766118  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:07:33.766300  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.766313  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.766629  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.766646  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.766684  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.768511  401591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 12:07:33.770162  401591 addons.go:510] duration metric: took 1.075047661s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 12:07:33.770212  401591 start.go:246] waiting for cluster config update ...
	I1007 12:07:33.770227  401591 start.go:255] writing updated cluster config ...
	I1007 12:07:33.772026  401591 out.go:201] 
	I1007 12:07:33.773570  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:33.773647  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:33.775167  401591 out.go:177] * Starting "ha-628553-m02" control-plane node in "ha-628553" cluster
	I1007 12:07:33.776386  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:07:33.776419  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:07:33.776564  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:07:33.776577  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:07:33.776670  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:33.776889  401591 start.go:360] acquireMachinesLock for ha-628553-m02: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:07:33.776949  401591 start.go:364] duration metric: took 33.552µs to acquireMachinesLock for "ha-628553-m02"
	I1007 12:07:33.776978  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:07:33.777088  401591 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 12:07:33.779624  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:07:33.779742  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:33.779791  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:33.795004  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I1007 12:07:33.795415  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:33.795909  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:33.795931  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:33.796264  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:33.796498  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:33.796628  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:33.796770  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:07:33.796805  401591 client.go:168] LocalClient.Create starting
	I1007 12:07:33.796847  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:07:33.796894  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:07:33.796911  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:07:33.796968  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:07:33.796987  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:07:33.796997  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:07:33.797015  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:07:33.797023  401591 main.go:141] libmachine: (ha-628553-m02) Calling .PreCreateCheck
	I1007 12:07:33.797222  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:33.797700  401591 main.go:141] libmachine: Creating machine...
	I1007 12:07:33.797714  401591 main.go:141] libmachine: (ha-628553-m02) Calling .Create
	I1007 12:07:33.797891  401591 main.go:141] libmachine: (ha-628553-m02) Creating KVM machine...
	I1007 12:07:33.799094  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found existing default KVM network
	I1007 12:07:33.799243  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found existing private KVM network mk-ha-628553
	I1007 12:07:33.799364  401591 main.go:141] libmachine: (ha-628553-m02) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 ...
	I1007 12:07:33.799377  401591 main.go:141] libmachine: (ha-628553-m02) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:07:33.799477  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:33.799367  401944 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:07:33.799603  401591 main.go:141] libmachine: (ha-628553-m02) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:07:34.069404  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.069235  401944 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa...
	I1007 12:07:34.176325  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.176157  401944 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/ha-628553-m02.rawdisk...
	I1007 12:07:34.176359  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Writing magic tar header
	I1007 12:07:34.176372  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Writing SSH key tar header
	I1007 12:07:34.176384  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.176303  401944 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 ...
	I1007 12:07:34.176398  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02
	I1007 12:07:34.176501  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:07:34.176544  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:07:34.176555  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 (perms=drwx------)
	I1007 12:07:34.176567  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:07:34.176576  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:07:34.176583  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:07:34.176594  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:07:34.176609  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:07:34.176622  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:07:34.176635  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:07:34.176651  401591 main.go:141] libmachine: (ha-628553-m02) Creating domain...
	I1007 12:07:34.176660  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:07:34.176668  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home
	I1007 12:07:34.176675  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Skipping /home - not owner
	I1007 12:07:34.177701  401591 main.go:141] libmachine: (ha-628553-m02) define libvirt domain using xml: 
	I1007 12:07:34.177730  401591 main.go:141] libmachine: (ha-628553-m02) <domain type='kvm'>
	I1007 12:07:34.177740  401591 main.go:141] libmachine: (ha-628553-m02)   <name>ha-628553-m02</name>
	I1007 12:07:34.177751  401591 main.go:141] libmachine: (ha-628553-m02)   <memory unit='MiB'>2200</memory>
	I1007 12:07:34.177759  401591 main.go:141] libmachine: (ha-628553-m02)   <vcpu>2</vcpu>
	I1007 12:07:34.177766  401591 main.go:141] libmachine: (ha-628553-m02)   <features>
	I1007 12:07:34.177777  401591 main.go:141] libmachine: (ha-628553-m02)     <acpi/>
	I1007 12:07:34.177786  401591 main.go:141] libmachine: (ha-628553-m02)     <apic/>
	I1007 12:07:34.177796  401591 main.go:141] libmachine: (ha-628553-m02)     <pae/>
	I1007 12:07:34.177809  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.177820  401591 main.go:141] libmachine: (ha-628553-m02)   </features>
	I1007 12:07:34.177834  401591 main.go:141] libmachine: (ha-628553-m02)   <cpu mode='host-passthrough'>
	I1007 12:07:34.177844  401591 main.go:141] libmachine: (ha-628553-m02)   
	I1007 12:07:34.177853  401591 main.go:141] libmachine: (ha-628553-m02)   </cpu>
	I1007 12:07:34.177864  401591 main.go:141] libmachine: (ha-628553-m02)   <os>
	I1007 12:07:34.177870  401591 main.go:141] libmachine: (ha-628553-m02)     <type>hvm</type>
	I1007 12:07:34.177876  401591 main.go:141] libmachine: (ha-628553-m02)     <boot dev='cdrom'/>
	I1007 12:07:34.177883  401591 main.go:141] libmachine: (ha-628553-m02)     <boot dev='hd'/>
	I1007 12:07:34.177888  401591 main.go:141] libmachine: (ha-628553-m02)     <bootmenu enable='no'/>
	I1007 12:07:34.177895  401591 main.go:141] libmachine: (ha-628553-m02)   </os>
	I1007 12:07:34.177900  401591 main.go:141] libmachine: (ha-628553-m02)   <devices>
	I1007 12:07:34.177910  401591 main.go:141] libmachine: (ha-628553-m02)     <disk type='file' device='cdrom'>
	I1007 12:07:34.177952  401591 main.go:141] libmachine: (ha-628553-m02)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/boot2docker.iso'/>
	I1007 12:07:34.177981  401591 main.go:141] libmachine: (ha-628553-m02)       <target dev='hdc' bus='scsi'/>
	I1007 12:07:34.177992  401591 main.go:141] libmachine: (ha-628553-m02)       <readonly/>
	I1007 12:07:34.178002  401591 main.go:141] libmachine: (ha-628553-m02)     </disk>
	I1007 12:07:34.178015  401591 main.go:141] libmachine: (ha-628553-m02)     <disk type='file' device='disk'>
	I1007 12:07:34.178028  401591 main.go:141] libmachine: (ha-628553-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:07:34.178044  401591 main.go:141] libmachine: (ha-628553-m02)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/ha-628553-m02.rawdisk'/>
	I1007 12:07:34.178055  401591 main.go:141] libmachine: (ha-628553-m02)       <target dev='hda' bus='virtio'/>
	I1007 12:07:34.178066  401591 main.go:141] libmachine: (ha-628553-m02)     </disk>
	I1007 12:07:34.178073  401591 main.go:141] libmachine: (ha-628553-m02)     <interface type='network'>
	I1007 12:07:34.178085  401591 main.go:141] libmachine: (ha-628553-m02)       <source network='mk-ha-628553'/>
	I1007 12:07:34.178102  401591 main.go:141] libmachine: (ha-628553-m02)       <model type='virtio'/>
	I1007 12:07:34.178114  401591 main.go:141] libmachine: (ha-628553-m02)     </interface>
	I1007 12:07:34.178125  401591 main.go:141] libmachine: (ha-628553-m02)     <interface type='network'>
	I1007 12:07:34.178138  401591 main.go:141] libmachine: (ha-628553-m02)       <source network='default'/>
	I1007 12:07:34.178148  401591 main.go:141] libmachine: (ha-628553-m02)       <model type='virtio'/>
	I1007 12:07:34.178157  401591 main.go:141] libmachine: (ha-628553-m02)     </interface>
	I1007 12:07:34.178172  401591 main.go:141] libmachine: (ha-628553-m02)     <serial type='pty'>
	I1007 12:07:34.178184  401591 main.go:141] libmachine: (ha-628553-m02)       <target port='0'/>
	I1007 12:07:34.178191  401591 main.go:141] libmachine: (ha-628553-m02)     </serial>
	I1007 12:07:34.178201  401591 main.go:141] libmachine: (ha-628553-m02)     <console type='pty'>
	I1007 12:07:34.178212  401591 main.go:141] libmachine: (ha-628553-m02)       <target type='serial' port='0'/>
	I1007 12:07:34.178223  401591 main.go:141] libmachine: (ha-628553-m02)     </console>
	I1007 12:07:34.178233  401591 main.go:141] libmachine: (ha-628553-m02)     <rng model='virtio'>
	I1007 12:07:34.178266  401591 main.go:141] libmachine: (ha-628553-m02)       <backend model='random'>/dev/random</backend>
	I1007 12:07:34.178292  401591 main.go:141] libmachine: (ha-628553-m02)     </rng>
	I1007 12:07:34.178303  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.178316  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.178324  401591 main.go:141] libmachine: (ha-628553-m02)   </devices>
	I1007 12:07:34.178331  401591 main.go:141] libmachine: (ha-628553-m02) </domain>
	I1007 12:07:34.178342  401591 main.go:141] libmachine: (ha-628553-m02) 
	I1007 12:07:34.185967  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:33:2a:81 in network default
	I1007 12:07:34.186520  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring networks are active...
	I1007 12:07:34.186550  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:34.187255  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring network default is active
	I1007 12:07:34.187562  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring network mk-ha-628553 is active
	I1007 12:07:34.187923  401591 main.go:141] libmachine: (ha-628553-m02) Getting domain xml...
	I1007 12:07:34.188741  401591 main.go:141] libmachine: (ha-628553-m02) Creating domain...
	I1007 12:07:35.460306  401591 main.go:141] libmachine: (ha-628553-m02) Waiting to get IP...
	I1007 12:07:35.461270  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.461715  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.461750  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.461693  401944 retry.go:31] will retry after 211.598538ms: waiting for machine to come up
	I1007 12:07:35.675347  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.675895  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.675927  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.675805  401944 retry.go:31] will retry after 296.849ms: waiting for machine to come up
	I1007 12:07:35.974395  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.974893  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.974954  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.974854  401944 retry.go:31] will retry after 388.404149ms: waiting for machine to come up
	I1007 12:07:36.365448  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:36.366155  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:36.366184  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:36.366075  401944 retry.go:31] will retry after 534.318698ms: waiting for machine to come up
	I1007 12:07:36.901907  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:36.902475  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:36.902512  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:36.902413  401944 retry.go:31] will retry after 649.263788ms: waiting for machine to come up
	I1007 12:07:37.553345  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:37.553872  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:37.553898  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:37.553792  401944 retry.go:31] will retry after 939.159086ms: waiting for machine to come up
	I1007 12:07:38.495133  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:38.495757  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:38.495785  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:38.495703  401944 retry.go:31] will retry after 913.128072ms: waiting for machine to come up
	I1007 12:07:39.410208  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:39.410778  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:39.410847  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:39.410734  401944 retry.go:31] will retry after 1.275296837s: waiting for machine to come up
	I1007 12:07:40.688215  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:40.688737  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:40.688763  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:40.688692  401944 retry.go:31] will retry after 1.706568868s: waiting for machine to come up
	I1007 12:07:42.397331  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:42.398210  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:42.398242  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:42.398140  401944 retry.go:31] will retry after 2.035219193s: waiting for machine to come up
	I1007 12:07:44.435063  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:44.435558  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:44.435604  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:44.435541  401944 retry.go:31] will retry after 2.129313504s: waiting for machine to come up
	I1007 12:07:46.567866  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:46.568337  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:46.568363  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:46.568294  401944 retry.go:31] will retry after 2.900138556s: waiting for machine to come up
	I1007 12:07:49.470446  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:49.470835  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:49.470861  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:49.470787  401944 retry.go:31] will retry after 2.802723119s: waiting for machine to come up
	I1007 12:07:52.276755  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:52.277120  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:52.277151  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:52.277100  401944 retry.go:31] will retry after 4.815030442s: waiting for machine to come up
	I1007 12:07:57.095944  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.096384  401591 main.go:141] libmachine: (ha-628553-m02) Found IP for machine: 192.168.39.169
	I1007 12:07:57.096411  401591 main.go:141] libmachine: (ha-628553-m02) Reserving static IP address...
	I1007 12:07:57.096424  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has current primary IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.096805  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find host DHCP lease matching {name: "ha-628553-m02", mac: "52:54:00:59:4a:2e", ip: "192.168.39.169"} in network mk-ha-628553
	I1007 12:07:57.173671  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Getting to WaitForSSH function...
	I1007 12:07:57.173707  401591 main.go:141] libmachine: (ha-628553-m02) Reserved static IP address: 192.168.39.169
	I1007 12:07:57.173721  401591 main.go:141] libmachine: (ha-628553-m02) Waiting for SSH to be available...
	I1007 12:07:57.176077  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.176414  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.176448  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.176591  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH client type: external
	I1007 12:07:57.176618  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa (-rw-------)
	I1007 12:07:57.176654  401591 main.go:141] libmachine: (ha-628553-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:07:57.176671  401591 main.go:141] libmachine: (ha-628553-m02) DBG | About to run SSH command:
	I1007 12:07:57.176683  401591 main.go:141] libmachine: (ha-628553-m02) DBG | exit 0
	I1007 12:07:57.299343  401591 main.go:141] libmachine: (ha-628553-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 12:07:57.299606  401591 main.go:141] libmachine: (ha-628553-m02) KVM machine creation complete!
	I1007 12:07:57.299951  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:57.300520  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:57.300733  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:57.300899  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:07:57.300909  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetState
	I1007 12:07:57.302247  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:07:57.302263  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:07:57.302270  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:07:57.302277  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.304689  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.305046  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.305083  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.305220  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.305416  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.305566  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.305687  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.305859  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.306075  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.306087  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:07:57.402628  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:57.402652  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:07:57.402660  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.405841  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.406213  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.406245  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.406443  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.406658  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.406871  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.407020  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.407143  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.407310  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.407320  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:07:57.503882  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:07:57.503964  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:07:57.503972  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:07:57.503980  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.504231  401591 buildroot.go:166] provisioning hostname "ha-628553-m02"
	I1007 12:07:57.504259  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.504487  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.507249  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.507577  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.507606  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.507742  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.507923  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.508054  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.508176  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.508480  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.508681  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.508694  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m02 && echo "ha-628553-m02" | sudo tee /etc/hostname
	I1007 12:07:57.622198  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m02
	
	I1007 12:07:57.622239  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.625084  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.625439  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.625478  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.625644  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.625837  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.626007  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.626130  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.626308  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.626503  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.626525  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:07:57.732566  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:57.732598  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:07:57.732622  401591 buildroot.go:174] setting up certificates
	I1007 12:07:57.732636  401591 provision.go:84] configureAuth start
	I1007 12:07:57.732649  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.732948  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:57.735493  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.735786  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.735817  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.735963  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.737975  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.738293  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.738318  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.738455  401591 provision.go:143] copyHostCerts
	I1007 12:07:57.738486  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:57.738525  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:07:57.738541  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:57.738610  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:07:57.738684  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:57.738703  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:07:57.738710  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:57.738733  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:07:57.738777  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:57.738793  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:07:57.738800  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:57.738820  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:07:57.738866  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m02 san=[127.0.0.1 192.168.39.169 ha-628553-m02 localhost minikube]
	I1007 12:07:58.143814  401591 provision.go:177] copyRemoteCerts
	I1007 12:07:58.143882  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:07:58.143910  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.147250  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.147700  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.147742  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.147869  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.148081  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.148224  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.148327  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.230179  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:07:58.230271  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:07:58.258288  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:07:58.258382  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:07:58.285135  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:07:58.285208  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:07:58.312621  401591 provision.go:87] duration metric: took 579.970325ms to configureAuth
	I1007 12:07:58.312652  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:07:58.312828  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:58.312907  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.315586  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.315959  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.315990  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.316222  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.316422  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.316601  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.316743  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.316927  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:58.317142  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:58.317161  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:07:58.545249  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:07:58.545278  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:07:58.545290  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetURL
	I1007 12:07:58.546702  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using libvirt version 6000000
	I1007 12:07:58.548842  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.549284  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.549317  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.549407  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:07:58.549418  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:07:58.549424  401591 client.go:171] duration metric: took 24.752608877s to LocalClient.Create
	I1007 12:07:58.549459  401591 start.go:167] duration metric: took 24.752691243s to libmachine.API.Create "ha-628553"
	I1007 12:07:58.549474  401591 start.go:293] postStartSetup for "ha-628553-m02" (driver="kvm2")
	I1007 12:07:58.549489  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:07:58.549507  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.549760  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:07:58.549786  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.551787  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.552071  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.552105  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.552239  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.552437  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.552667  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.552832  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.629949  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:07:58.634600  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:07:58.634633  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:07:58.634716  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:07:58.634820  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:07:58.634833  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:07:58.634948  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:07:58.644927  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:58.670613  401591 start.go:296] duration metric: took 121.120015ms for postStartSetup
	I1007 12:07:58.670687  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:58.671316  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:58.673738  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.674117  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.674143  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.674429  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:58.674687  401591 start.go:128] duration metric: took 24.897586771s to createHost
	I1007 12:07:58.674717  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.676881  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.677232  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.677261  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.677369  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.677545  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.677717  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.677844  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.677997  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:58.678177  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:58.678188  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:07:58.776120  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302878.748851389
	
	I1007 12:07:58.776147  401591 fix.go:216] guest clock: 1728302878.748851389
	I1007 12:07:58.776158  401591 fix.go:229] Guest: 2024-10-07 12:07:58.748851389 +0000 UTC Remote: 2024-10-07 12:07:58.674704612 +0000 UTC m=+72.466738357 (delta=74.146777ms)
	I1007 12:07:58.776181  401591 fix.go:200] guest clock delta is within tolerance: 74.146777ms
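The fix.go lines above parse the guest's date +%s.%N output, compare it against the host clock, and accept the machine when the delta stays within tolerance (74ms here). A self-contained sketch of that comparison; the one-second tolerance below is an assumption for illustration:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1728302878.748851389") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1728302878.748851389")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	// Assumed tolerance for illustration; the log only shows that ~74ms was accepted.
	tolerance := time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
	}
}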
	I1007 12:07:58.776187  401591 start.go:83] releasing machines lock for "ha-628553-m02", held for 24.999226116s
	I1007 12:07:58.776211  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.776496  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:58.779145  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.779528  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.779560  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.782069  401591 out.go:177] * Found network options:
	I1007 12:07:58.783459  401591 out.go:177]   - NO_PROXY=192.168.39.110
	W1007 12:07:58.784861  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:07:58.784899  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785569  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785759  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785866  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:07:58.785905  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	W1007 12:07:58.785978  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:07:58.786070  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:07:58.786094  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.788699  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.788936  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789075  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.789100  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789286  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.789381  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.789402  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789444  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.789536  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.789631  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.789706  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.789783  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.789824  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.789925  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:59.016879  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:07:59.023633  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:07:59.023710  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:07:59.041152  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:07:59.041183  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:07:59.041268  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:07:59.058168  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:07:59.074089  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:07:59.074153  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:07:59.089704  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:07:59.104808  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:07:59.234539  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:07:59.391501  401591 docker.go:233] disabling docker service ...
	I1007 12:07:59.391564  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:07:59.406313  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:07:59.420588  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:07:59.553910  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:07:59.664194  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:07:59.679241  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:07:59.699517  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:07:59.699594  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.710670  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:07:59.710739  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.721864  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.733897  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.746035  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:07:59.757811  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.769881  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.789700  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.800942  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:07:59.811016  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:07:59.811084  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:07:59.827337  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
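The sysctl probe above fails because br_netfilter is not loaded yet, so the flow falls back to modprobe and then enables IPv4 forwarding. A hedged sketch of that probe-then-load fallback with os/exec (requires root; the function name is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the probe/fallback seen in the log: check the
// sysctl key first and only load the module if the key is missing.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // key exists, nothing to do
	}
	if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter available, ip_forward enabled")
}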
	I1007 12:07:59.838316  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:59.964123  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:08:00.067227  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:08:00.067310  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:08:00.073044  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:08:00.073120  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:08:00.077800  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:08:00.127300  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:08:00.127397  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:08:00.156941  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:08:00.190072  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:08:00.191853  401591 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:08:00.193177  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:08:00.196263  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:08:00.196746  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:08:00.196779  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:08:00.196928  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:08:00.201903  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:00.215603  401591 mustload.go:65] Loading cluster: ha-628553
	I1007 12:08:00.215803  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:00.216063  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:00.216108  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:00.231500  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I1007 12:08:00.231984  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:00.232515  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:00.232538  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:00.232906  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:00.233117  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:08:00.234754  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:08:00.235153  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:00.235205  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:00.251119  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I1007 12:08:00.251713  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:00.252244  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:00.252269  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:00.252599  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:00.252779  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:08:00.252870  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.169
	I1007 12:08:00.252879  401591 certs.go:194] generating shared ca certs ...
	I1007 12:08:00.252902  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.253042  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:08:00.253085  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:08:00.253095  401591 certs.go:256] generating profile certs ...
	I1007 12:08:00.253179  401591 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:08:00.253210  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7
	I1007 12:08:00.253235  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.254]
	I1007 12:08:00.386276  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 ...
	I1007 12:08:00.386312  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7: {Name:mk3203e0eda21b3db6f2dd0a690d84683948f867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.386525  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7 ...
	I1007 12:08:00.386553  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7: {Name:mkfc3d62b17b51155465b7666879f42f7347e54c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.386666  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:08:00.386851  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
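The profile certificate generated above carries IP SANs for the service IP, loopback, both control-plane nodes, and the kube-vip VIP 192.168.39.254. A standard-library sketch of issuing a leaf certificate with those IP SANs; the throwaway in-memory CA and 24h validity are assumptions, and error handling is omitted for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA for the sketch; the real flow reuses the profile's minikubeCA key.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate with the IP SANs listed in the log for the apiserver.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.110"), net.ParseIP("192.168.39.169"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}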
	I1007 12:08:00.387056  401591 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:08:00.387074  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:08:00.387092  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:08:00.387112  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:08:00.387134  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:08:00.387151  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:08:00.387168  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:08:00.387184  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:08:00.387203  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:08:00.387277  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:08:00.387324  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:08:00.387338  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:08:00.387372  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:08:00.387402  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:08:00.387436  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:08:00.387492  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:08:00.387532  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:08:00.387560  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:08:00.387578  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:00.387630  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:08:00.391299  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:00.391779  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:08:00.391810  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:00.392002  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:08:00.392226  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:08:00.392412  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:08:00.392620  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:08:00.467476  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:08:00.476301  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:08:00.489016  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:08:00.494136  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:08:00.509194  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:08:00.513966  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:08:00.525972  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:08:00.530730  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:08:00.543099  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:08:00.548533  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:08:00.560887  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:08:00.565537  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:08:00.578649  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:08:00.607063  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:08:00.634228  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:08:00.660702  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:08:00.687010  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 12:08:00.713721  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:08:00.740934  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:08:00.768133  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:08:00.794572  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:08:00.820864  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:08:00.847539  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:08:00.876441  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:08:00.895435  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:08:00.913785  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:08:00.932908  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:08:00.951947  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:08:00.969974  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:08:00.988515  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:08:01.007600  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:08:01.014010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:08:01.025708  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.030507  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.030585  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.037094  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:08:01.049368  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:08:01.062454  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.067451  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.067538  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.073743  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:08:01.085386  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:08:01.096871  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.102352  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.102441  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.108559  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
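The ln -fs commands above install each PEM under /etc/ssl/certs using its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem), where the hash comes from openssl x509 -hash -noout. A small sketch that derives the link name the same way; it shells out to the openssl binary, and the paths are only examples:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHashLink creates <certsDir>/<hash>.0 -> certPath, the naming
// convention OpenSSL uses to look up CA certificates by subject hash.
func subjectHashLink(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// os.Symlink fails if the link already exists; remove first to mimic `ln -fs`.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}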
	I1007 12:08:01.120791  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:08:01.125796  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:08:01.125854  401591 kubeadm.go:934] updating node {m02 192.168.39.169 8443 v1.31.1 crio true true} ...
	I1007 12:08:01.125945  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:08:01.125972  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:08:01.126011  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:08:01.142927  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:08:01.143035  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
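The kube-vip.go lines above render the static pod manifest from the cluster VIP (192.168.39.254); the result is written to /etc/kubernetes/manifests/kube-vip.yaml further down in the log. A trimmed text/template sketch of that rendering step; the struct and template here are condensed for illustration and are not minikube's full template:

package main

import (
	"os"
	"text/template"
)

// vipConfig carries only the fields this condensed template needs.
type vipConfig struct {
	VIP   string
	Port  string
	Image string
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	cfg := vipConfig{VIP: "192.168.39.254", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v0.8.3"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}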
	I1007 12:08:01.143100  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:08:01.154825  401591 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:08:01.154901  401591 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:08:01.166246  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:08:01.166280  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:08:01.166330  401591 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 12:08:01.166350  401591 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm
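The kubelet and kubeadm binaries above are fetched from dl.k8s.io with a checksum=file: hint, meaning the payload is verified against the published .sha256 sidecar. A self-contained sketch of that download-and-verify step, using the kubectl URL from the log; error handling is shortened:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch returns the body of url or an error on non-200 responses.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	body, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sidecar, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(strings.TrimSpace(string(sidecar)))[0]
	sum := sha256.Sum256(body)
	got := hex.EncodeToString(sum[:])
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Printf("verified %d bytes against %s\n", len(body), want)
}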
	I1007 12:08:01.166352  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:08:01.171889  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:08:01.171923  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:08:01.865609  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:08:01.865701  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:08:01.871954  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:08:01.872006  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:08:01.960218  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:08:02.002318  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:08:02.002440  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:08:02.020653  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:08:02.020697  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1007 12:08:02.500270  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:08:02.510702  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:08:02.529075  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:08:02.546750  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:08:02.565165  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:08:02.569362  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:02.582612  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:02.707124  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:08:02.725325  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:08:02.725700  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:02.725750  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:02.741913  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I1007 12:08:02.742441  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:02.742930  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:02.742953  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:02.743338  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:02.743547  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:08:02.743717  401591 start.go:317] joinCluster: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:08:02.743844  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:08:02.743869  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:08:02.747217  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:02.747665  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:08:02.747694  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:02.747872  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:08:02.748048  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:08:02.748193  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:08:02.748311  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:08:02.893504  401591 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:02.893569  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xsg4ou.msqa1mnarg4j4fst --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m02 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443"
	I1007 12:08:24.411215  401591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xsg4ou.msqa1mnarg4j4fst --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m02 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443": (21.517602331s)
	I1007 12:08:24.411250  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:08:24.991460  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553-m02 minikube.k8s.io/updated_at=2024_10_07T12_08_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=false
	I1007 12:08:25.149659  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-628553-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:08:25.289097  401591 start.go:319] duration metric: took 22.545377397s to joinCluster
	I1007 12:08:25.289200  401591 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:25.289529  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:25.291070  401591 out.go:177] * Verifying Kubernetes components...
	I1007 12:08:25.292571  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:25.564988  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:08:25.614504  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:08:25.614869  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:08:25.614979  401591 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:08:25.615327  401591 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m02" to be "Ready" ...
	I1007 12:08:25.615461  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:25.615476  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:25.615490  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:25.615502  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:25.627711  401591 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 12:08:26.115662  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:26.115688  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:26.115696  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:26.115700  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:26.119790  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:26.615649  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:26.615673  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:26.615681  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:26.615685  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:26.619911  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.115994  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:27.116020  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:27.116029  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:27.116032  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:27.120154  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.616200  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:27.616222  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:27.616230  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:27.616234  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:27.620627  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.621267  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
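The node_ready.go lines above poll GET /api/v1/nodes/ha-628553-m02 roughly every half second until the Ready condition turns True or the 6m budget runs out. An equivalent loop written against client-go; the kubeconfig path and node name are taken from the log, and this is an illustrative rewrite rather than minikube's round_trippers-based code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19763-377026/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-628553-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}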
	I1007 12:08:28.116293  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:28.116321  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:28.116331  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:28.116337  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:28.121199  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:28.616216  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:28.616252  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:28.616260  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:28.616275  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:28.624618  401591 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:08:29.116125  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:29.116148  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:29.116156  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:29.116161  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:29.143192  401591 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1007 12:08:29.616218  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:29.616252  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:29.616260  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:29.616263  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:29.621645  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:29.622758  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:30.116377  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:30.116414  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:30.116434  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:30.116442  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:30.120276  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:30.616264  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:30.616289  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:30.616298  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:30.616302  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:30.619656  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:31.115662  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:31.115686  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:31.115695  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:31.115698  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:31.120037  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:31.616077  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:31.616103  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:31.616112  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:31.616119  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.027207  401591 round_trippers.go:574] Response Status: 200 OK in 411 milliseconds
	I1007 12:08:32.028035  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:32.116023  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:32.116049  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:32.116061  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:32.116066  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.123800  401591 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:08:32.615910  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:32.615936  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:32.615945  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:32.615949  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.619848  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:33.115622  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:33.115645  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:33.115652  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:33.115657  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:33.119744  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:33.616336  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:33.616363  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:33.616372  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:33.616378  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:33.620139  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:34.116322  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:34.116357  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:34.116368  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:34.116374  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:34.119958  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:34.120614  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:34.615645  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:34.615672  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:34.615682  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:34.615687  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:34.619017  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:35.115922  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:35.115951  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:35.115965  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:35.115969  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:35.119735  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:35.615551  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:35.615578  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:35.615589  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:35.615595  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:35.619854  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:36.115806  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:36.115830  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:36.115839  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:36.115842  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:36.119509  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:36.616590  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:36.616626  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:36.616638  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:36.616646  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:36.620711  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:36.621977  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:37.116201  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:37.116229  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:37.116237  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:37.116241  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:37.119861  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:37.615763  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:37.615789  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:37.615798  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:37.615801  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:37.619542  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:38.116230  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:38.116254  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:38.116262  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:38.116266  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:38.119599  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:38.616300  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:38.616327  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:38.616336  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:38.616340  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:38.622637  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:08:38.623148  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:39.116056  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:39.116089  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:39.116102  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:39.116108  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:39.119313  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:39.615634  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:39.615660  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:39.615668  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:39.615672  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:39.619449  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:40.116288  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:40.116318  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:40.116330  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:40.116337  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:40.120596  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:40.615608  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:40.615636  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:40.615645  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:40.615650  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:40.619654  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:41.115684  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:41.115712  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:41.115723  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:41.115729  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:41.119362  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:41.119941  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:41.616052  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:41.616080  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:41.616092  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:41.616099  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:41.621355  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:42.116153  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:42.116179  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:42.116190  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:42.116195  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:42.119158  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:42.615813  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:42.615838  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:42.615849  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:42.615856  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:42.619479  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.116150  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.116183  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.116193  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.116197  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.119726  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.120412  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:43.615803  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.615825  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.615833  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.615837  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.619282  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.619820  401591 node_ready.go:49] node "ha-628553-m02" has status "Ready":"True"
	I1007 12:08:43.619840  401591 node_ready.go:38] duration metric: took 18.00448517s for node "ha-628553-m02" to be "Ready" ...
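(The loop above is the test polling the Node object roughly every 500ms until its Ready condition flips to True. As a hedged illustration only -- this is not minikube's node_ready helper, and the kubeconfig path and node name below are assumptions -- an equivalent poll written against client-go could look like the following.)

// readiness_poll.go -- illustrative sketch of the readiness poll seen in the log above.
// Assumes a kubeconfig at the default location pointing at the same cluster.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-628553-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node Ready")
					return
				}
			}
		}
		// Matches the ~500ms cadence visible in the timestamps above.
		time.Sleep(500 * time.Millisecond)
	}
}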
	I1007 12:08:43.619850  401591 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:08:43.619942  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:43.619953  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.619962  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.619968  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.625430  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:43.631358  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.631464  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:08:43.631473  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.631481  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.631485  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.634796  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.635822  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.635842  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.635852  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.635858  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.638589  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.639211  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.639241  401591 pod_ready.go:82] duration metric: took 7.850216ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.639256  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.639336  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:08:43.639349  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.639360  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.639367  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.642168  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.642861  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.642879  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.642885  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.642891  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.645645  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.646131  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.646152  401591 pod_ready.go:82] duration metric: took 6.888201ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.646164  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.646225  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:08:43.646233  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.646240  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.646244  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.649034  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.649700  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.649718  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.649726  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.649731  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.652932  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.653474  401591 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.653494  401591 pod_ready.go:82] duration metric: took 7.324392ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.653506  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.653570  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:08:43.653578  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.653585  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.653589  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.656625  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.657314  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.657332  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.657340  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.657344  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.659929  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.660411  401591 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.660431  401591 pod_ready.go:82] duration metric: took 6.918652ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.660446  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.816876  401591 request.go:632] Waited for 156.326759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:08:43.816939  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:08:43.816943  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.816951  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.816956  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.820806  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.015988  401591 request.go:632] Waited for 194.312012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.016073  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.016081  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.016091  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.016121  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.019609  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.020136  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.020158  401591 pod_ready.go:82] duration metric: took 359.705878ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.020169  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.216359  401591 request.go:632] Waited for 196.109348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:08:44.216441  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:08:44.216449  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.216460  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.216468  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.222633  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:08:44.416891  401591 request.go:632] Waited for 193.411987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:44.416975  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:44.416983  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.416993  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.416999  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.420954  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.421562  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.421582  401591 pod_ready.go:82] duration metric: took 401.406583ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.421592  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.616625  401591 request.go:632] Waited for 194.940502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:08:44.616688  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:08:44.616693  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.616701  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.616707  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.620706  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.815865  401591 request.go:632] Waited for 194.348456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.815947  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.815954  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.815966  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.815972  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.819923  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.820749  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.820767  401591 pod_ready.go:82] duration metric: took 399.169132ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.820778  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.015880  401591 request.go:632] Waited for 195.028084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:08:45.015978  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:08:45.015983  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.015991  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.015997  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.020421  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.216616  401591 request.go:632] Waited for 195.391964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:45.216689  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:45.216696  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.216707  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.216712  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.221024  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.221697  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:45.221728  401591 pod_ready.go:82] duration metric: took 400.942386ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.221743  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.416754  401591 request.go:632] Waited for 194.909444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:08:45.416821  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:08:45.416834  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.416842  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.416848  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.421020  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.616294  401591 request.go:632] Waited for 194.468244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:45.616378  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:45.616387  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.616399  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.616406  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.620542  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.621474  401591 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:45.621500  401591 pod_ready.go:82] duration metric: took 399.748616ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.621515  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.816631  401591 request.go:632] Waited for 195.03231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:08:45.816699  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:08:45.816705  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.816713  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.816718  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.820607  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:46.016805  401591 request.go:632] Waited for 195.41966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.016911  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.016918  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.016926  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.016930  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.021351  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.021889  401591 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.021914  401591 pod_ready.go:82] duration metric: took 400.391171ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.021926  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.215992  401591 request.go:632] Waited for 193.955382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:08:46.216085  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:08:46.216092  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.216102  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.216108  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.219547  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:46.416084  401591 request.go:632] Waited for 195.950012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:46.416159  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:46.416167  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.416179  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.416198  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.420356  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.420972  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.420993  401591 pod_ready.go:82] duration metric: took 399.057557ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.421005  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.616254  401591 request.go:632] Waited for 195.135703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:08:46.616343  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:08:46.616355  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.616366  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.616375  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.625428  401591 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:08:46.816391  401591 request.go:632] Waited for 190.390972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.816468  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.816473  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.816482  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.816488  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.820601  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.821110  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.821133  401591 pod_ready.go:82] duration metric: took 400.121331ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.821145  401591 pod_ready.go:39] duration metric: took 3.201283112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
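(The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in this phase come from client-go's client-side rate limiter rather than from server-side API Priority and Fairness. A minimal sketch of the rest.Config knobs involved is below; the QPS/Burst values are illustrative assumptions, not what minikube actually configures.)

// throttle.go -- illustrative only; shows the client-go rest.Config fields behind the
// "client-side throttling" waits logged above.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go's historical defaults are low (QPS 5, Burst 10), so bursts of
	// back-to-back GETs like the ones above are delayed by ~150-200ms each.
	cfg.QPS = 50    // illustrative value only
	cfg.Burst = 100 // illustrative value only
	_ = kubernetes.NewForConfigOrDie(cfg)
}

(Raising the limits trades fewer client-side waits for more load on the apiserver; the waits themselves are harmless in this test and are logged purely for visibility.)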
	I1007 12:08:46.821161  401591 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:08:46.821222  401591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:08:46.839291  401591 api_server.go:72] duration metric: took 21.550041864s to wait for apiserver process to appear ...
	I1007 12:08:46.839326  401591 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:08:46.839354  401591 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:08:46.845263  401591 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
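(The healthz probe logged here is a plain GET of the apiserver's /healthz endpoint; the literal body "ok" means the control plane reports itself healthy. A hedged sketch of the same probe through client-go's REST client follows -- an illustration, not the api_server.go code; it assumes the default kubeconfig location.)

// healthz.go -- illustrative probe of the apiserver /healthz endpoint.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET https://<apiserver>/healthz; expects the body "ok" on success,
	// mirroring the 200 response in the log above.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}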
	I1007 12:08:46.845352  401591 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:08:46.845360  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.845369  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.845373  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.846772  401591 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1007 12:08:46.846883  401591 api_server.go:141] control plane version: v1.31.1
	I1007 12:08:46.846902  401591 api_server.go:131] duration metric: took 7.569264ms to wait for apiserver health ...
	I1007 12:08:46.846910  401591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:08:47.016224  401591 request.go:632] Waited for 169.208213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.016315  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.016324  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.016337  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.016348  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.021945  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:47.026191  401591 system_pods.go:59] 17 kube-system pods found
	I1007 12:08:47.026232  401591 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:08:47.026238  401591 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:08:47.026242  401591 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:08:47.026246  401591 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:08:47.026251  401591 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:08:47.026255  401591 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:08:47.026260  401591 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:08:47.026264  401591 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:08:47.026268  401591 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:08:47.026273  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:08:47.026276  401591 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:08:47.026279  401591 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:08:47.026282  401591 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:08:47.026285  401591 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:08:47.026288  401591 system_pods.go:61] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:08:47.026291  401591 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:08:47.026294  401591 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:08:47.026300  401591 system_pods.go:74] duration metric: took 179.385599ms to wait for pod list to return data ...
	I1007 12:08:47.026311  401591 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:08:47.216777  401591 request.go:632] Waited for 190.349118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:08:47.216844  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:08:47.216851  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.216861  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.216867  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.220501  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:47.220765  401591 default_sa.go:45] found service account: "default"
	I1007 12:08:47.220790  401591 default_sa.go:55] duration metric: took 194.471685ms for default service account to be created ...
	I1007 12:08:47.220803  401591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:08:47.416131  401591 request.go:632] Waited for 195.245207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.416207  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.416215  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.416224  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.416238  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.422085  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:47.426776  401591 system_pods.go:86] 17 kube-system pods found
	I1007 12:08:47.426812  401591 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:08:47.426820  401591 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:08:47.426826  401591 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:08:47.426832  401591 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:08:47.426837  401591 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:08:47.426842  401591 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:08:47.426848  401591 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:08:47.426853  401591 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:08:47.426858  401591 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:08:47.426863  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:08:47.426868  401591 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:08:47.426873  401591 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:08:47.426881  401591 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:08:47.426887  401591 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:08:47.426892  401591 system_pods.go:89] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:08:47.426898  401591 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:08:47.426907  401591 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:08:47.426918  401591 system_pods.go:126] duration metric: took 206.105758ms to wait for k8s-apps to be running ...
	I1007 12:08:47.426931  401591 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:08:47.427006  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:08:47.444273  401591 system_svc.go:56] duration metric: took 17.328443ms WaitForService to wait for kubelet
	I1007 12:08:47.444313  401591 kubeadm.go:582] duration metric: took 22.155070744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:08:47.444339  401591 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:08:47.616864  401591 request.go:632] Waited for 172.422315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:08:47.616938  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:08:47.616945  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.616961  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.616969  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.621972  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:47.622888  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:08:47.622919  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:08:47.622945  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:08:47.622950  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:08:47.622955  401591 node_conditions.go:105] duration metric: took 178.610758ms to run NodePressure ...
	I1007 12:08:47.622983  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:08:47.623014  401591 start.go:255] writing updated cluster config ...
	I1007 12:08:47.625468  401591 out.go:201] 
	I1007 12:08:47.627200  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:47.627328  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:08:47.629319  401591 out.go:177] * Starting "ha-628553-m03" control-plane node in "ha-628553" cluster
	I1007 12:08:47.630767  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:08:47.630807  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:08:47.630955  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:08:47.630986  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:08:47.631145  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:08:47.631383  401591 start.go:360] acquireMachinesLock for ha-628553-m03: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:08:47.631439  401591 start.go:364] duration metric: took 32.151µs to acquireMachinesLock for "ha-628553-m03"
	I1007 12:08:47.631463  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:47.631573  401591 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 12:08:47.633396  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:08:47.633527  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:47.633570  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:47.650117  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I1007 12:08:47.650636  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:47.651158  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:47.651181  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:47.651622  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:47.651783  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:08:47.651941  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:08:47.652092  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:08:47.652123  401591 client.go:168] LocalClient.Create starting
	I1007 12:08:47.652165  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:08:47.652208  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:08:47.652231  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:08:47.652328  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:08:47.652361  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:08:47.652377  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:08:47.652400  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:08:47.652412  401591 main.go:141] libmachine: (ha-628553-m03) Calling .PreCreateCheck
	I1007 12:08:47.652572  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:08:47.652989  401591 main.go:141] libmachine: Creating machine...
	I1007 12:08:47.653006  401591 main.go:141] libmachine: (ha-628553-m03) Calling .Create
	I1007 12:08:47.653161  401591 main.go:141] libmachine: (ha-628553-m03) Creating KVM machine...
	I1007 12:08:47.654461  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found existing default KVM network
	I1007 12:08:47.654504  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found existing private KVM network mk-ha-628553
	I1007 12:08:47.654721  401591 main.go:141] libmachine: (ha-628553-m03) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 ...
	I1007 12:08:47.654751  401591 main.go:141] libmachine: (ha-628553-m03) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:08:47.654817  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:47.654705  402350 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:08:47.654927  401591 main.go:141] libmachine: (ha-628553-m03) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:08:47.943561  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:47.943397  402350 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa...
	I1007 12:08:48.157872  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:48.157710  402350 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/ha-628553-m03.rawdisk...
	I1007 12:08:48.157916  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Writing magic tar header
	I1007 12:08:48.157932  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Writing SSH key tar header
	I1007 12:08:48.157944  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:48.157825  402350 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 ...
	I1007 12:08:48.157970  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03
	I1007 12:08:48.158063  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 (perms=drwx------)
	I1007 12:08:48.158107  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:08:48.158121  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:08:48.158141  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:08:48.158150  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:08:48.158232  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:08:48.158257  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:08:48.158266  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:08:48.158280  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:08:48.158289  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:08:48.158307  401591 main.go:141] libmachine: (ha-628553-m03) Creating domain...
	I1007 12:08:48.158321  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:08:48.158335  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home
	I1007 12:08:48.158350  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Skipping /home - not owner
	I1007 12:08:48.159295  401591 main.go:141] libmachine: (ha-628553-m03) define libvirt domain using xml: 
	I1007 12:08:48.159314  401591 main.go:141] libmachine: (ha-628553-m03) <domain type='kvm'>
	I1007 12:08:48.159321  401591 main.go:141] libmachine: (ha-628553-m03)   <name>ha-628553-m03</name>
	I1007 12:08:48.159327  401591 main.go:141] libmachine: (ha-628553-m03)   <memory unit='MiB'>2200</memory>
	I1007 12:08:48.159361  401591 main.go:141] libmachine: (ha-628553-m03)   <vcpu>2</vcpu>
	I1007 12:08:48.159380  401591 main.go:141] libmachine: (ha-628553-m03)   <features>
	I1007 12:08:48.159389  401591 main.go:141] libmachine: (ha-628553-m03)     <acpi/>
	I1007 12:08:48.159398  401591 main.go:141] libmachine: (ha-628553-m03)     <apic/>
	I1007 12:08:48.159406  401591 main.go:141] libmachine: (ha-628553-m03)     <pae/>
	I1007 12:08:48.159416  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159423  401591 main.go:141] libmachine: (ha-628553-m03)   </features>
	I1007 12:08:48.159430  401591 main.go:141] libmachine: (ha-628553-m03)   <cpu mode='host-passthrough'>
	I1007 12:08:48.159437  401591 main.go:141] libmachine: (ha-628553-m03)   
	I1007 12:08:48.159446  401591 main.go:141] libmachine: (ha-628553-m03)   </cpu>
	I1007 12:08:48.159455  401591 main.go:141] libmachine: (ha-628553-m03)   <os>
	I1007 12:08:48.159465  401591 main.go:141] libmachine: (ha-628553-m03)     <type>hvm</type>
	I1007 12:08:48.159477  401591 main.go:141] libmachine: (ha-628553-m03)     <boot dev='cdrom'/>
	I1007 12:08:48.159488  401591 main.go:141] libmachine: (ha-628553-m03)     <boot dev='hd'/>
	I1007 12:08:48.159499  401591 main.go:141] libmachine: (ha-628553-m03)     <bootmenu enable='no'/>
	I1007 12:08:48.159508  401591 main.go:141] libmachine: (ha-628553-m03)   </os>
	I1007 12:08:48.159518  401591 main.go:141] libmachine: (ha-628553-m03)   <devices>
	I1007 12:08:48.159527  401591 main.go:141] libmachine: (ha-628553-m03)     <disk type='file' device='cdrom'>
	I1007 12:08:48.159543  401591 main.go:141] libmachine: (ha-628553-m03)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/boot2docker.iso'/>
	I1007 12:08:48.159554  401591 main.go:141] libmachine: (ha-628553-m03)       <target dev='hdc' bus='scsi'/>
	I1007 12:08:48.159561  401591 main.go:141] libmachine: (ha-628553-m03)       <readonly/>
	I1007 12:08:48.159571  401591 main.go:141] libmachine: (ha-628553-m03)     </disk>
	I1007 12:08:48.159579  401591 main.go:141] libmachine: (ha-628553-m03)     <disk type='file' device='disk'>
	I1007 12:08:48.159596  401591 main.go:141] libmachine: (ha-628553-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:08:48.159611  401591 main.go:141] libmachine: (ha-628553-m03)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/ha-628553-m03.rawdisk'/>
	I1007 12:08:48.159621  401591 main.go:141] libmachine: (ha-628553-m03)       <target dev='hda' bus='virtio'/>
	I1007 12:08:48.159629  401591 main.go:141] libmachine: (ha-628553-m03)     </disk>
	I1007 12:08:48.159639  401591 main.go:141] libmachine: (ha-628553-m03)     <interface type='network'>
	I1007 12:08:48.159647  401591 main.go:141] libmachine: (ha-628553-m03)       <source network='mk-ha-628553'/>
	I1007 12:08:48.159659  401591 main.go:141] libmachine: (ha-628553-m03)       <model type='virtio'/>
	I1007 12:08:48.159667  401591 main.go:141] libmachine: (ha-628553-m03)     </interface>
	I1007 12:08:48.159677  401591 main.go:141] libmachine: (ha-628553-m03)     <interface type='network'>
	I1007 12:08:48.159685  401591 main.go:141] libmachine: (ha-628553-m03)       <source network='default'/>
	I1007 12:08:48.159695  401591 main.go:141] libmachine: (ha-628553-m03)       <model type='virtio'/>
	I1007 12:08:48.159702  401591 main.go:141] libmachine: (ha-628553-m03)     </interface>
	I1007 12:08:48.159711  401591 main.go:141] libmachine: (ha-628553-m03)     <serial type='pty'>
	I1007 12:08:48.159722  401591 main.go:141] libmachine: (ha-628553-m03)       <target port='0'/>
	I1007 12:08:48.159732  401591 main.go:141] libmachine: (ha-628553-m03)     </serial>
	I1007 12:08:48.159741  401591 main.go:141] libmachine: (ha-628553-m03)     <console type='pty'>
	I1007 12:08:48.159751  401591 main.go:141] libmachine: (ha-628553-m03)       <target type='serial' port='0'/>
	I1007 12:08:48.159759  401591 main.go:141] libmachine: (ha-628553-m03)     </console>
	I1007 12:08:48.159769  401591 main.go:141] libmachine: (ha-628553-m03)     <rng model='virtio'>
	I1007 12:08:48.159779  401591 main.go:141] libmachine: (ha-628553-m03)       <backend model='random'>/dev/random</backend>
	I1007 12:08:48.159786  401591 main.go:141] libmachine: (ha-628553-m03)     </rng>
	I1007 12:08:48.159791  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159796  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159801  401591 main.go:141] libmachine: (ha-628553-m03)   </devices>
	I1007 12:08:48.159807  401591 main.go:141] libmachine: (ha-628553-m03) </domain>
	I1007 12:08:48.159814  401591 main.go:141] libmachine: (ha-628553-m03) 
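For orientation: the XML logged above is the domain definition the kvm2 driver hands to libvirt before booting the node VM. Below is a minimal, hypothetical sketch of defining and starting such a domain through the libvirt Go bindings; the import path, connection URI, file name and error handling are assumptions for illustration, not minikube's actual code path.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

func main() {
	// Connect to the local system hypervisor (URI assumed).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Read a <domain> definition like the one logged above (hypothetical file).
	xml, err := os.ReadFile("ha-628553-m03.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then boot it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}

The log continues with the driver activating the two networks referenced in the XML and waiting for the new VM to appear on them.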
	I1007 12:08:48.167454  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:19:9b:6c in network default
	I1007 12:08:48.168104  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring networks are active...
	I1007 12:08:48.168135  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:48.168903  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring network default is active
	I1007 12:08:48.169240  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring network mk-ha-628553 is active
	I1007 12:08:48.169699  401591 main.go:141] libmachine: (ha-628553-m03) Getting domain xml...
	I1007 12:08:48.170532  401591 main.go:141] libmachine: (ha-628553-m03) Creating domain...
	I1007 12:08:49.440366  401591 main.go:141] libmachine: (ha-628553-m03) Waiting to get IP...
	I1007 12:08:49.441248  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:49.441739  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:49.441772  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:49.441711  402350 retry.go:31] will retry after 304.052486ms: waiting for machine to come up
	I1007 12:08:49.747277  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:49.747963  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:49.747996  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:49.747904  402350 retry.go:31] will retry after 363.120796ms: waiting for machine to come up
	I1007 12:08:50.113364  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.113854  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.113886  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.113784  402350 retry.go:31] will retry after 318.214065ms: waiting for machine to come up
	I1007 12:08:50.434117  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.434742  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.434772  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.434669  402350 retry.go:31] will retry after 557.05591ms: waiting for machine to come up
	I1007 12:08:50.993368  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.993877  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.993902  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.993839  402350 retry.go:31] will retry after 534.862367ms: waiting for machine to come up
	I1007 12:08:51.530722  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:51.531299  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:51.531330  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:51.531236  402350 retry.go:31] will retry after 674.225428ms: waiting for machine to come up
	I1007 12:08:52.207219  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:52.207779  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:52.207805  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:52.207744  402350 retry.go:31] will retry after 750.38088ms: waiting for machine to come up
	I1007 12:08:52.959912  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:52.960419  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:52.960456  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:52.960375  402350 retry.go:31] will retry after 1.032745665s: waiting for machine to come up
	I1007 12:08:53.994776  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:53.995316  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:53.995345  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:53.995259  402350 retry.go:31] will retry after 1.174624993s: waiting for machine to come up
	I1007 12:08:55.171247  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:55.171687  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:55.171709  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:55.171640  402350 retry.go:31] will retry after 2.315279218s: waiting for machine to come up
	I1007 12:08:57.488351  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:57.488810  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:57.488838  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:57.488771  402350 retry.go:31] will retry after 1.769995019s: waiting for machine to come up
	I1007 12:08:59.260072  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:59.260605  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:59.260637  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:59.260547  402350 retry.go:31] will retry after 3.352254545s: waiting for machine to come up
	I1007 12:09:02.616362  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:02.616828  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:09:02.616850  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:09:02.616780  402350 retry.go:31] will retry after 4.496920566s: waiting for machine to come up
	I1007 12:09:07.118974  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:07.119565  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:09:07.119593  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:09:07.119492  402350 retry.go:31] will retry after 4.132199874s: waiting for machine to come up
	I1007 12:09:11.256196  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.256790  401591 main.go:141] libmachine: (ha-628553-m03) Found IP for machine: 192.168.39.149
	I1007 12:09:11.256824  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has current primary IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.256833  401591 main.go:141] libmachine: (ha-628553-m03) Reserving static IP address...
	I1007 12:09:11.257175  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find host DHCP lease matching {name: "ha-628553-m03", mac: "52:54:00:3c:9f:34", ip: "192.168.39.149"} in network mk-ha-628553
	I1007 12:09:11.338093  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Getting to WaitForSSH function...
	I1007 12:09:11.338124  401591 main.go:141] libmachine: (ha-628553-m03) Reserved static IP address: 192.168.39.149
	I1007 12:09:11.338139  401591 main.go:141] libmachine: (ha-628553-m03) Waiting for SSH to be available...
	I1007 12:09:11.341396  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.341892  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.341925  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.342105  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH client type: external
	I1007 12:09:11.342133  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa (-rw-------)
	I1007 12:09:11.342177  401591 main.go:141] libmachine: (ha-628553-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:09:11.342197  401591 main.go:141] libmachine: (ha-628553-m03) DBG | About to run SSH command:
	I1007 12:09:11.342214  401591 main.go:141] libmachine: (ha-628553-m03) DBG | exit 0
	I1007 12:09:11.471281  401591 main.go:141] libmachine: (ha-628553-m03) DBG | SSH cmd err, output: <nil>: 
	I1007 12:09:11.471621  401591 main.go:141] libmachine: (ha-628553-m03) KVM machine creation complete!
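The "will retry after …" lines above come from a polling loop that waits for the freshly created VM to obtain a DHCP lease, sleeping a jittered, slowly growing interval between attempts. A generic sketch of that retry-with-backoff pattern follows; the lookup stub, growth factor and jitter are assumptions for illustration, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the DHCP leases of the libvirt network;
// it is a placeholder that always fails, for illustration only.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP until it succeeds or the timeout expires.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jittered, growing sleep, like the "will retry after 304.052486ms" messages.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}

Once an address is found, the driver reserves it as a static lease and switches to waiting for SSH, as the log shows next.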
	I1007 12:09:11.471952  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:09:11.472582  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:11.472840  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:11.473024  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:09:11.473037  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetState
	I1007 12:09:11.474527  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:09:11.474548  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:09:11.474555  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:09:11.474563  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.477303  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.477650  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.477666  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.477788  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.477993  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.478174  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.478306  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.478470  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.478702  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.478716  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:09:11.587071  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:09:11.587095  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:09:11.587105  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.589883  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.590265  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.590295  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.590447  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.590647  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.590829  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.591025  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.591169  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.591356  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.591367  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:09:11.704302  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:09:11.704403  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:09:11.704415  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:09:11.704426  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.704723  401591 buildroot.go:166] provisioning hostname "ha-628553-m03"
	I1007 12:09:11.704750  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.704905  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.707646  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.708032  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.708062  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.708204  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.708466  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.708666  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.708795  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.708972  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.709229  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.709247  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m03 && echo "ha-628553-m03" | sudo tee /etc/hostname
	I1007 12:09:11.834437  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m03
	
	I1007 12:09:11.834498  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.837609  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.837983  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.838013  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.838374  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.838612  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.838805  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.839005  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.839175  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.839394  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.839420  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:09:11.962733  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:09:11.962765  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:09:11.962788  401591 buildroot.go:174] setting up certificates
	I1007 12:09:11.962801  401591 provision.go:84] configureAuth start
	I1007 12:09:11.962814  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.963127  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:11.965755  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.966166  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.966201  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.966379  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.968397  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.968678  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.968703  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.968812  401591 provision.go:143] copyHostCerts
	I1007 12:09:11.968847  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:09:11.968897  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:09:11.968910  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:09:11.968994  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:09:11.969133  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:09:11.969163  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:09:11.969173  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:09:11.969222  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:09:11.969301  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:09:11.969326  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:09:11.969332  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:09:11.969367  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:09:11.969444  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m03 san=[127.0.0.1 192.168.39.149 ha-628553-m03 localhost minikube]
	I1007 12:09:12.008085  401591 provision.go:177] copyRemoteCerts
	I1007 12:09:12.008153  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:09:12.008198  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.011020  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.011447  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.011479  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.011639  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.011896  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.012077  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.012241  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.099103  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:09:12.099196  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:09:12.129470  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:09:12.129570  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:09:12.156229  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:09:12.156324  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:09:12.182409  401591 provision.go:87] duration metric: took 219.592268ms to configureAuth
	I1007 12:09:12.182440  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:09:12.182689  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:12.182805  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.186445  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.186906  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.186942  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.187197  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.187409  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.187561  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.187701  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.187919  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:12.188176  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:12.188201  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:09:12.442162  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:09:12.442201  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:09:12.442252  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetURL
	I1007 12:09:12.443642  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using libvirt version 6000000
	I1007 12:09:12.445960  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.446454  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.446484  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.446704  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:09:12.446717  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:09:12.446724  401591 client.go:171] duration metric: took 24.794590297s to LocalClient.Create
	I1007 12:09:12.446748  401591 start.go:167] duration metric: took 24.794658821s to libmachine.API.Create "ha-628553"
	I1007 12:09:12.446758  401591 start.go:293] postStartSetup for "ha-628553-m03" (driver="kvm2")
	I1007 12:09:12.446768  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:09:12.446787  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.447044  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:09:12.447067  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.449182  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.449535  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.449578  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.449689  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.449866  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.450019  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.450128  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.538407  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:09:12.543112  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:09:12.543143  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:09:12.543238  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:09:12.543327  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:09:12.543349  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:09:12.543452  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:09:12.553965  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:09:12.580260  401591 start.go:296] duration metric: took 133.488077ms for postStartSetup
	I1007 12:09:12.580320  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:09:12.580945  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:12.583692  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.584096  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.584119  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.584577  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:09:12.584810  401591 start.go:128] duration metric: took 24.953224798s to createHost
	I1007 12:09:12.584834  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.586899  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.587276  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.587304  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.587460  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.587666  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.587811  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.587989  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.588157  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:12.588403  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:12.588416  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:09:12.699909  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302952.675618146
	
	I1007 12:09:12.699944  401591 fix.go:216] guest clock: 1728302952.675618146
	I1007 12:09:12.699957  401591 fix.go:229] Guest: 2024-10-07 12:09:12.675618146 +0000 UTC Remote: 2024-10-07 12:09:12.584823089 +0000 UTC m=+146.376856843 (delta=90.795057ms)
	I1007 12:09:12.699983  401591 fix.go:200] guest clock delta is within tolerance: 90.795057ms
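The guest-clock check above simply compares the VM's `date +%s.%N` output against the host time and accepts the machine when the skew is inside a fixed tolerance. A tiny illustrative sketch of that comparison; the one-second tolerance is an assumption for the example, not minikube's configured value.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute host/guest clock delta and whether it is acceptable.
func withinTolerance(host, guest time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(90 * time.Millisecond) // e.g. the ~90ms delta seen in the log
	if delta, ok := withinTolerance(host, guest, time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta too large: %v\n", delta)
	}
}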
	I1007 12:09:12.700015  401591 start.go:83] releasing machines lock for "ha-628553-m03", held for 25.068545198s
	I1007 12:09:12.700046  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.700343  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:12.703273  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.703654  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.703685  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.706106  401591 out.go:177] * Found network options:
	I1007 12:09:12.707602  401591 out.go:177]   - NO_PROXY=192.168.39.110,192.168.39.169
	W1007 12:09:12.709074  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:09:12.709105  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:09:12.709125  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.709903  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.710157  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.710281  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:09:12.710326  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	W1007 12:09:12.710331  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:09:12.710350  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:09:12.710418  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:09:12.710435  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.713091  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713270  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713549  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.713577  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713688  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.713709  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713890  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.713892  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.714094  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.714096  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.714290  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.714293  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.714448  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.714465  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.965758  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:09:12.972410  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:09:12.972510  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:09:12.991892  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:09:12.991924  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:09:12.992029  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:09:13.011092  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:09:13.027119  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:09:13.027197  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:09:13.043881  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:09:13.059996  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:09:13.194059  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:09:13.363286  401591 docker.go:233] disabling docker service ...
	I1007 12:09:13.363388  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:09:13.380238  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:09:13.395090  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:09:13.539822  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:09:13.684666  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:09:13.699806  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:09:13.721312  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:09:13.721394  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.734593  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:09:13.734678  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.746652  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.758752  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.770649  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:09:13.783579  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.796044  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.816090  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
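Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is an approximate reconstruction from the commands in this log (including the assumed TOML section placement), not a dump of the actual file on the node.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

With the drop-in in place, the log proceeds to load br_netfilter, enable IP forwarding and restart crio so the new configuration takes effect.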
	I1007 12:09:13.829211  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:09:13.841584  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:09:13.841652  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:09:13.858346  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:09:13.870682  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:14.015562  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:09:14.112385  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:09:14.112472  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:09:14.117706  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:09:14.117785  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:09:14.121973  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:09:14.164678  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:09:14.164778  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:09:14.195026  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:09:14.228305  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:09:14.229710  401591 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:09:14.230954  401591 out.go:177]   - env NO_PROXY=192.168.39.110,192.168.39.169
	I1007 12:09:14.232215  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:14.235268  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:14.236414  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:14.236455  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:14.236834  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:09:14.241615  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:09:14.255885  401591 mustload.go:65] Loading cluster: ha-628553
	I1007 12:09:14.256171  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:14.256468  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:14.256525  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:14.272191  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I1007 12:09:14.272704  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:14.273292  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:14.273317  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:14.273675  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:14.273860  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:09:14.275739  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:09:14.276042  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:14.276078  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:14.291563  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34379
	I1007 12:09:14.291960  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:14.292503  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:14.292525  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:14.292841  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:14.293029  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:09:14.293266  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.149
	I1007 12:09:14.293282  401591 certs.go:194] generating shared ca certs ...
	I1007 12:09:14.293298  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.293454  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:09:14.293500  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:09:14.293518  401591 certs.go:256] generating profile certs ...
	I1007 12:09:14.293595  401591 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:09:14.293624  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5
	I1007 12:09:14.293644  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.149 192.168.39.254]
	I1007 12:09:14.510662  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 ...
	I1007 12:09:14.510698  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5: {Name:mke401c308480be9f53e9bff701f2e9e4cf3af88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.510883  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5 ...
	I1007 12:09:14.510897  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5: {Name:mk6ef257f67983b566726de1c934d8565c12b533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.510988  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:09:14.511123  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:09:14.511263  401591 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:09:14.511281  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:09:14.511294  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:09:14.511306  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:09:14.511318  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:09:14.511328  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:09:14.511341  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:09:14.511350  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:09:14.551130  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:09:14.551306  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:09:14.551354  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:09:14.551363  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:09:14.551385  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:09:14.551414  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:09:14.551453  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:09:14.551518  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:09:14.551570  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:14.551588  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:09:14.551601  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:09:14.551640  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:09:14.554905  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:14.555423  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:09:14.555460  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:14.555653  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:09:14.555879  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:09:14.556052  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:09:14.556195  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:09:14.631352  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:09:14.636908  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:09:14.651074  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:09:14.656279  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:09:14.669909  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:09:14.674787  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:09:14.685770  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:09:14.690694  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:09:14.702721  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:09:14.707691  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:09:14.719165  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:09:14.724048  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:09:14.737169  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:09:14.766716  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:09:14.794736  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:09:14.821693  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:09:14.848771  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:09:14.877403  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:09:14.903816  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:09:14.930704  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:09:14.958763  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:09:14.986639  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:09:15.012198  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:09:15.040552  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:09:15.060843  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:09:15.079624  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:09:15.099559  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:09:15.119015  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:09:15.138902  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:09:15.157844  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:09:15.176996  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:09:15.183306  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:09:15.195832  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.201336  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.201442  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.208010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:09:15.220845  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:09:15.233290  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.238387  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.238463  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.245368  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:09:15.257699  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:09:15.270151  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.274983  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.275048  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.281100  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
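The three blocks above install each CA into the guest's OpenSSL trust store: the certificate is placed under /usr/share/ca-certificates, its subject hash is computed with "openssl x509 -hash -noout", and a symlink named <hash>.0 is created under /etc/ssl/certs. Below is a minimal Go sketch of those same two commands; it is an illustration only, not minikube's code, and the helper name installCACert is invented for the example.

// Minimal sketch (not minikube's implementation): register a CA certificate
// the way the log above does it — compute its OpenSSL subject hash and
// symlink it into /etc/ssl/certs/<hash>.0 so OpenSSL-based clients trust it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(certPath string) error {
	// "openssl x509 -hash -noout -in <cert>" prints the subject hash,
	// e.g. "b5213941", which is the symlink name OpenSSL looks up.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, mirroring the "ln -fs" in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}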
	I1007 12:09:15.293845  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:09:15.298173  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:09:15.298242  401591 kubeadm.go:934] updating node {m03 192.168.39.149 8443 v1.31.1 crio true true} ...
	I1007 12:09:15.298356  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:09:15.298388  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:09:15.298436  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:09:15.316713  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:09:15.316806  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
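The manifest above is generated by minikube and, a few lines below, copied to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes) so the kubelet runs kube-vip as a static pod that announces the 192.168.39.254 VIP and load-balances the API server. The following Go sketch shows how such a manifest could be rendered with text/template; the template string and the vipParams type are invented for the example and are not minikube's actual source.

// Illustrative sketch only: render a kube-vip static pod manifest from a
// small set of parameters, similar in spirit to the config printed above.
package main

import (
	"os"
	"text/template"
)

// vipParams holds the values that vary per cluster; the names are
// hypothetical, chosen just for this example.
type vipParams struct {
	VIP       string // virtual IP announced by kube-vip, e.g. 192.168.39.254
	Port      string // API server port fronted by the VIP
	Interface string // NIC kube-vip binds to on the control-plane node
	Image     string // kube-vip container image
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - {name: vip_arp, value: "true"}
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: cp_enable, value: "true"}
    - {name: lb_enable, value: "true"}
    - {name: lb_port, value: "{{.Port}}"}
    - {name: address, value: {{.VIP}}}
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`

func main() {
	p := vipParams{VIP: "192.168.39.254", Port: "8443", Interface: "eth0",
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.3"}
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		os.Exit(1)
	}
}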
	I1007 12:09:15.316885  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:09:15.329178  401591 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:09:15.329260  401591 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:09:15.341535  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 12:09:15.341551  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 12:09:15.341569  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:09:15.341576  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:09:15.341585  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:09:15.341597  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:09:15.341641  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:09:15.341660  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:09:15.361141  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:09:15.361169  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:09:15.361188  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:09:15.361231  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:09:15.361273  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:09:15.361282  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:09:15.386048  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:09:15.386094  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
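Because the binaries are not cached on the new node, they are fetched against the upstream SHA-256 checksum files (the "?checksum=file:..." URLs above) and copied into /var/lib/minikube/binaries/v1.31.1 once the stat existence checks fail. A small Go sketch of a checksum-verified download under those assumptions follows; downloadVerified and the destination path are illustrative, not minikube's downloader.

// Minimal sketch: download a Kubernetes binary and verify it against the
// published .sha256 file, as suggested by the checksum URLs in the log.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func downloadVerified(version, binary, dest string) error {
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/amd64/%s", version, binary)

	sum, err := fetch(base + ".sha256") // the file contains the hex digest
	if err != nil {
		return err
	}
	want := strings.Fields(string(sum))[0]

	data, err := fetch(base)
	if err != nil {
		return err
	}
	got := sha256.Sum256(data)
	if hex.EncodeToString(got[:]) != want {
		return fmt.Errorf("%s: checksum mismatch", binary)
	}
	return os.WriteFile(dest, data, 0o755)
}

func main() {
	if err := downloadVerified("v1.31.1", "kubectl", "/tmp/kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}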
	I1007 12:09:16.354010  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:09:16.365447  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:09:16.386247  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:09:16.405656  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:09:16.424160  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:09:16.428897  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:09:16.443784  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:16.576452  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:09:16.595070  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:09:16.595602  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:16.595675  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:16.612706  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
	I1007 12:09:16.613341  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:16.613998  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:16.614030  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:16.614425  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:16.614648  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:09:16.614817  401591 start.go:317] joinCluster: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:09:16.615034  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:09:16.615063  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:09:16.618382  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:16.618897  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:09:16.618931  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:16.619128  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:09:16.619318  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:09:16.619512  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:09:16.619676  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:09:16.786244  401591 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:09:16.786300  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lajva.py7n2yqd96dw6gb3 --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m03 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443"
	I1007 12:09:40.133777  401591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lajva.py7n2yqd96dw6gb3 --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m03 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443": (23.347442914s)
	I1007 12:09:40.133833  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:09:40.642262  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553-m03 minikube.k8s.io/updated_at=2024_10_07T12_09_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=false
	I1007 12:09:40.798800  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-628553-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:09:40.938486  401591 start.go:319] duration metric: took 24.323665443s to joinCluster
	I1007 12:09:40.938574  401591 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:09:40.938992  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:40.939839  401591 out.go:177] * Verifying Kubernetes components...
	I1007 12:09:40.941073  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:41.179331  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:09:41.207454  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:09:41.207837  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:09:41.207937  401591 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
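The two lines above show the harness building a client from the kubeconfig on the Jenkins host and then pointing it at one concrete control-plane endpoint (192.168.39.110:8443) instead of the HA VIP. A minimal client-go sketch of that pattern follows; the kubeconfig path and error handling are simplified for illustration.

// Sketch only: build a Kubernetes client from a kubeconfig file and
// override the API server host, roughly what the two log lines above describe.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Talk to one concrete control-plane endpoint instead of the HA VIP.
	cfg.Host = "https://192.168.39.110:8443"

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-628553-m03", metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(node.Name)
}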
	I1007 12:09:41.208281  401591 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m03" to be "Ready" ...
	I1007 12:09:41.208393  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:41.208405  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:41.208416  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:41.208425  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:41.212516  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:41.709058  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:41.709088  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:41.709105  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:41.709111  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:41.712889  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:42.209244  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:42.209270  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:42.209282  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:42.209291  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:42.215411  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:42.708822  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:42.708852  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:42.708859  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:42.708864  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:42.712350  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:43.208783  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:43.208814  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:43.208825  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:43.208830  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:43.212641  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:43.213313  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:43.708554  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:43.708586  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:43.708598  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:43.708603  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:43.712869  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:44.209341  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:44.209369  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:44.209378  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:44.209383  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:44.213843  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:44.708627  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:44.708655  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:44.708667  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:44.708674  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:44.712946  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:45.208740  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:45.208767  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:45.208780  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:45.208787  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:45.212825  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:45.213803  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:45.709194  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:45.709226  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:45.709239  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:45.709244  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:45.713036  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:46.209154  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:46.209181  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:46.209192  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:46.209196  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:46.212466  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:46.708677  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:46.708707  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:46.708716  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:46.708724  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:46.712340  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:47.208818  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:47.208842  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:47.208851  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:47.208857  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:47.212615  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:47.709164  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:47.709193  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:47.709202  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:47.709205  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:47.713234  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:47.713781  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:48.209498  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:48.209525  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:48.209534  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:48.209537  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:48.213755  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:48.708587  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:48.708611  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:48.708621  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:48.708624  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:48.712036  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:49.208568  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:49.208592  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:49.208603  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:49.208607  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:49.211903  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:49.708691  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:49.708716  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:49.708725  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:49.708729  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:49.712776  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:50.208877  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:50.208902  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:50.208911  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:50.208914  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:50.212493  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:50.213081  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:50.709538  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:50.709562  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:50.709571  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:50.709575  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:50.713279  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:51.209230  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:51.209256  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:51.209265  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:51.209268  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:51.213382  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:51.708830  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:51.708854  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:51.708862  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:51.708866  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:51.712240  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:52.208900  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:52.208926  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:52.208939  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:52.208946  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:52.215313  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:52.216003  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:52.708705  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:52.708730  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:52.708738  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:52.708742  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:52.712616  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:53.209443  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:53.209470  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:53.209480  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:53.209484  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:53.220542  401591 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:09:53.709519  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:53.709546  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:53.709558  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:53.709564  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:53.716163  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:54.208707  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:54.208734  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:54.208746  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:54.208760  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:54.213435  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:54.708587  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:54.708610  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:54.708619  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:54.708622  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:54.712056  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:54.712859  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:55.209203  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:55.209231  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:55.209239  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:55.209245  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:55.212768  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:55.708667  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:55.708695  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:55.708703  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:55.708707  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:55.712313  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.209354  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:56.209383  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.209395  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.209403  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.213377  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.708881  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:56.708908  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.708919  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.708924  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.712370  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.712935  401591 node_ready.go:49] node "ha-628553-m03" has status "Ready":"True"
	I1007 12:09:56.712963  401591 node_ready.go:38] duration metric: took 15.504655916s for node "ha-628553-m03" to be "Ready" ...
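The polling loop above issues GET /api/v1/nodes/ha-628553-m03 roughly every 500ms until the node reports Ready, which took about 15.5s here. A minimal client-go sketch of an equivalent wait is shown below; it assumes a clientset built as in the previous sketch and uses the 6-minute budget mentioned in the log.

// Sketch of the readiness wait shown above: poll the node every 500ms until
// its Ready condition is True or the timeout expires.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), client, "ha-628553-m03", 6*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node is Ready")
}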
	I1007 12:09:56.712977  401591 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:09:56.713073  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:09:56.713085  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.713097  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.713103  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.718978  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:09:56.726344  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.726456  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:09:56.726466  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.726474  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.726490  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.730546  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:56.731604  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.731626  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.731635  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.731641  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.735028  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.735631  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.735652  401591 pod_ready.go:82] duration metric: took 9.273238ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.735664  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.735733  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:09:56.735741  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.735750  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.735755  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.739406  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.740176  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.740199  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.740209  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.740214  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.743560  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.744246  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.744282  401591 pod_ready.go:82] duration metric: took 8.60988ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.744297  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.744377  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:09:56.744385  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.744394  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.744399  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.747762  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.748602  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.748620  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.748631  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.748635  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.751819  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.752620  401591 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.752643  401591 pod_ready.go:82] duration metric: took 8.33893ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.752653  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.752721  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:09:56.752728  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.752736  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.752744  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.755841  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.756900  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:56.756919  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.756928  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.756933  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.762051  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:09:56.762546  401591 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.762567  401591 pod_ready.go:82] duration metric: took 9.907016ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.762577  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.908942  401591 request.go:632] Waited for 146.263139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:09:56.909015  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:09:56.909020  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.909028  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.909033  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.912564  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.109760  401591 request.go:632] Waited for 196.38743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:57.109828  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:57.109833  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.109841  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.109845  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.113445  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.114014  401591 pod_ready.go:93] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.114033  401591 pod_ready.go:82] duration metric: took 351.449136ms for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.114057  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.309353  401591 request.go:632] Waited for 195.205622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:09:57.309419  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:09:57.309425  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.309432  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.309437  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.313075  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.509082  401591 request.go:632] Waited for 195.305317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:57.509151  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:57.509155  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.509166  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.509174  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.512625  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.513112  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.513132  401591 pod_ready.go:82] duration metric: took 399.067745ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.513143  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.709708  401591 request.go:632] Waited for 196.474408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:09:57.709781  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:09:57.709786  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.709794  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.709800  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.713831  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:57.908898  401591 request.go:632] Waited for 194.228676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:57.908982  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:57.908989  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.909010  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.909018  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.912443  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.912928  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.912946  401591 pod_ready.go:82] duration metric: took 399.796848ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.912957  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.109126  401591 request.go:632] Waited for 196.089672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:09:58.109228  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:09:58.109239  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.109254  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.109263  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.113302  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:58.309458  401591 request.go:632] Waited for 195.377342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:58.309526  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:58.309532  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.309540  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.309547  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.313264  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.313917  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:58.313941  401591 pod_ready.go:82] duration metric: took 400.976971ms for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.313953  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.508886  401591 request.go:632] Waited for 194.833329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:09:58.508952  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:09:58.508957  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.508965  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.508968  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.512699  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.709582  401591 request.go:632] Waited for 196.246847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:58.709646  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:58.709651  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.709659  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.709664  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.713267  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.713852  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:58.713872  401591 pod_ready.go:82] duration metric: took 399.911675ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.713882  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.909557  401591 request.go:632] Waited for 195.589727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:09:58.909638  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:09:58.909646  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.909658  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.909667  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.913323  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.109300  401591 request.go:632] Waited for 195.248412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:59.109385  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:59.109397  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.109413  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.109423  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.113724  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.114391  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.114424  401591 pod_ready.go:82] duration metric: took 400.532344ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.114440  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.309421  401591 request.go:632] Waited for 194.863237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:09:59.309496  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:09:59.309505  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.309513  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.309517  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.313524  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.509863  401591 request.go:632] Waited for 195.376113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.509933  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.509939  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.509947  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.509952  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.514238  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.514980  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.515006  401591 pod_ready.go:82] duration metric: took 400.556348ms for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.515021  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.708902  401591 request.go:632] Waited for 193.788377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:09:59.708979  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:09:59.708984  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.708994  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.708999  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.713254  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.909528  401591 request.go:632] Waited for 195.290175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.909618  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.909629  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.909647  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.909670  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.913334  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.913821  401591 pod_ready.go:93] pod "kube-proxy-956k4" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.913839  401591 pod_ready.go:82] duration metric: took 398.810891ms for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.913849  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.108920  401591 request.go:632] Waited for 194.960284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:10:00.108989  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:10:00.108994  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.109003  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.109008  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.112562  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.309314  401591 request.go:632] Waited for 195.880007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:00.309383  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:00.309388  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.309398  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.309402  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.312741  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.313358  401591 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:00.313387  401591 pod_ready.go:82] duration metric: took 399.529803ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.313403  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.509443  401591 request.go:632] Waited for 195.933785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:10:00.509525  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:10:00.509534  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.509546  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.509553  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.513184  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.709406  401591 request.go:632] Waited for 195.365479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:00.709504  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:00.709514  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.709522  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.709529  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.713607  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:00.714279  401591 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:00.714309  401591 pod_ready.go:82] duration metric: took 400.896557ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.714325  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.909245  401591 request.go:632] Waited for 194.818143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:10:00.909342  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:10:00.909351  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.909364  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.909371  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.915481  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:01.109624  401591 request.go:632] Waited for 193.409101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:01.109691  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:01.109697  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.109705  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.109709  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.113699  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.114360  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.114385  401591 pod_ready.go:82] duration metric: took 400.050276ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.114400  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.309693  401591 request.go:632] Waited for 195.205987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:10:01.309795  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:10:01.309803  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.309815  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.309822  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.313815  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.508909  401591 request.go:632] Waited for 194.37677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:01.508986  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:01.508991  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.509002  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.509007  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.512742  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.513256  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.513276  401591 pod_ready.go:82] duration metric: took 398.86838ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.513288  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.709917  401591 request.go:632] Waited for 196.548883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:10:01.710017  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:10:01.710026  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.710034  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.710039  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.714122  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:01.909434  401591 request.go:632] Waited for 194.3948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:10:01.909513  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:10:01.909522  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.909532  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.909540  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.913611  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:01.914046  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.914070  401591 pod_ready.go:82] duration metric: took 400.775584ms for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.914081  401591 pod_ready.go:39] duration metric: took 5.201089226s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:10:01.914096  401591 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:10:01.914154  401591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:10:01.933363  401591 api_server.go:72] duration metric: took 20.994747532s to wait for apiserver process to appear ...
	I1007 12:10:01.933396  401591 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:10:01.933418  401591 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:10:01.938101  401591 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1007 12:10:01.938189  401591 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:10:01.938198  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.938207  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.938213  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.939122  401591 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:10:01.939199  401591 api_server.go:141] control plane version: v1.31.1
	I1007 12:10:01.939214  401591 api_server.go:131] duration metric: took 5.812529ms to wait for apiserver health ...
	I1007 12:10:01.939225  401591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:10:02.109608  401591 request.go:632] Waited for 170.278268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.109688  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.109696  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.109710  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.109721  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.116583  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:02.124470  401591 system_pods.go:59] 24 kube-system pods found
	I1007 12:10:02.124519  401591 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:10:02.124524  401591 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:10:02.124528  401591 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:10:02.124532  401591 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:10:02.124537  401591 system_pods.go:61] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:10:02.124541  401591 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:10:02.124545  401591 system_pods.go:61] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:10:02.124549  401591 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:10:02.124553  401591 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:10:02.124556  401591 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:10:02.124559  401591 system_pods.go:61] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:10:02.124563  401591 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:10:02.124566  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:10:02.124569  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:10:02.124572  401591 system_pods.go:61] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:10:02.124576  401591 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:10:02.124579  401591 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:10:02.124582  401591 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:10:02.124585  401591 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:10:02.124588  401591 system_pods.go:61] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:10:02.124591  401591 system_pods.go:61] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:10:02.124594  401591 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:10:02.124597  401591 system_pods.go:61] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:10:02.124600  401591 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:10:02.124608  401591 system_pods.go:74] duration metric: took 185.374126ms to wait for pod list to return data ...
	I1007 12:10:02.124621  401591 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:10:02.309914  401591 request.go:632] Waited for 185.18335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:10:02.309989  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:10:02.309995  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.310010  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.310017  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.318042  401591 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:10:02.318207  401591 default_sa.go:45] found service account: "default"
	I1007 12:10:02.318235  401591 default_sa.go:55] duration metric: took 193.599365ms for default service account to be created ...
	I1007 12:10:02.318250  401591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:10:02.509774  401591 request.go:632] Waited for 191.420927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.509840  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.509853  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.509866  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.509875  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.516685  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:02.523464  401591 system_pods.go:86] 24 kube-system pods found
	I1007 12:10:02.523503  401591 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:10:02.523511  401591 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:10:02.523516  401591 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:10:02.523522  401591 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:10:02.523528  401591 system_pods.go:89] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:10:02.523534  401591 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:10:02.523539  401591 system_pods.go:89] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:10:02.523573  401591 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:10:02.523579  401591 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:10:02.523585  401591 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:10:02.523591  401591 system_pods.go:89] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:10:02.523606  401591 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:10:02.523613  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:10:02.523619  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:10:02.523628  401591 system_pods.go:89] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:10:02.523634  401591 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:10:02.523640  401591 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:10:02.523651  401591 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:10:02.523657  401591 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:10:02.523662  401591 system_pods.go:89] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:10:02.523668  401591 system_pods.go:89] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:10:02.523674  401591 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:10:02.523679  401591 system_pods.go:89] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:10:02.523685  401591 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:10:02.523697  401591 system_pods.go:126] duration metric: took 205.439551ms to wait for k8s-apps to be running ...
	I1007 12:10:02.523709  401591 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:10:02.523771  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:10:02.542038  401591 system_svc.go:56] duration metric: took 18.318301ms WaitForService to wait for kubelet
	I1007 12:10:02.542084  401591 kubeadm.go:582] duration metric: took 21.603472414s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:10:02.542109  401591 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:10:02.709771  401591 request.go:632] Waited for 167.539386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:10:02.709854  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:10:02.709863  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.709874  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.709884  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.713363  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:02.714361  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714384  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714396  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714401  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714406  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714409  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714415  401591 node_conditions.go:105] duration metric: took 172.299605ms to run NodePressure ...
	I1007 12:10:02.714430  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:10:02.714459  401591 start.go:255] writing updated cluster config ...
	I1007 12:10:02.714781  401591 ssh_runner.go:195] Run: rm -f paused
	I1007 12:10:02.769817  401591 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:10:02.771879  401591 out.go:177] * Done! kubectl is now configured to use "ha-628553" cluster and "default" namespace by default
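[Editor's note] The minikube output above polls each control-plane pod (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) for a "Ready" condition, probes /healthz on the apiserver, and checks node conditions before printing "Done!". A rough command-line equivalent of that readiness check is sketched below; it assumes the "ha-628553" context that the log says kubectl was configured with, and the label selectors mirror the components listed in the pod_ready summary rather than anything taken from the test code itself:

	# wait up to 6m for the apiserver and etcd pods to report Ready (illustrative only)
	kubectl --context ha-628553 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=6m
	kubectl --context ha-628553 -n kube-system wait pod -l component=etcd --for=condition=Ready --timeout=6m
	# same healthz probe the log shows returning "ok"
	kubectl --context ha-628553 get --raw /healthz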
	
	
	==> CRI-O <==
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.359363356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303223359339590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8d69156-8375-441c-884b-9dcfd5220214 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.360164274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f133a1e-d502-4a69-9ff6-e3cc1ad519d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.360218688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f133a1e-d502-4a69-9ff6-e3cc1ad519d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.360471062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f133a1e-d502-4a69-9ff6-e3cc1ad519d6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.401222309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=418164f5-d120-47dd-a645-542580455c7e name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.401847004Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=418164f5-d120-47dd-a645-542580455c7e name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.404068007Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0cac9b2e-7ff8-486e-bc7f-e31719a7bd1f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.404501723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303223404481948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0cac9b2e-7ff8-486e-bc7f-e31719a7bd1f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.407546772Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=146ecd1a-980c-417e-8353-1a39aeca32a9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.407763679Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=146ecd1a-980c-417e-8353-1a39aeca32a9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.408366031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=146ecd1a-980c-417e-8353-1a39aeca32a9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.449592136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3a9b324-53b8-4890-bc10-7cb6c8ad3055 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.449667950Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3a9b324-53b8-4890-bc10-7cb6c8ad3055 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.450861381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=47e42263-86d9-44f7-8f23-1be97834907f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.451279215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303223451257467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47e42263-86d9-44f7-8f23-1be97834907f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.451886678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0aba0bc8-0713-40d5-a9cb-2f59f13d1934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.451962992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0aba0bc8-0713-40d5-a9cb-2f59f13d1934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.452211199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0aba0bc8-0713-40d5-a9cb-2f59f13d1934 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.494169969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d6674ad-1685-494a-9600-d5b469feca1b name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.494259736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d6674ad-1685-494a-9600-d5b469feca1b name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.495382610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7208ff6-ea1d-4a81-a5f1-1d3535882f23 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.496095935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303223496070560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7208ff6-ea1d-4a81-a5f1-1d3535882f23 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.496702694Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83432ac7-debf-41b1-ad48-f9b9f06ca081 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.496824107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83432ac7-debf-41b1-ad48-f9b9f06ca081 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:43 ha-628553 crio[670]: time="2024-10-07 12:13:43.497059529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83432ac7-debf-41b1-ad48-f9b9f06ca081 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cac09519e9d83       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3588af1ea926c       busybox-7dff88458-vc5k8
	914d5a55b5b7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   e4273414ae3c9       storage-provisioner
	4dcac83715ae5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   7a74be057c048       coredns-7c65d6cfc9-rsr6v
	0a438e52c0996       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   66f721a704d2d       coredns-7c65d6cfc9-ktmzq
	b10875321ed8d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   883a1bf7435de       kindnet-snf5v
	4a0b203aaca5a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   4ad2a2a2eae50       kube-proxy-h6vg8
	41e1b6a866662       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   9107fefdb6eca       kube-vip-ha-628553
	02649d86a8d5c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   e611d474900bc       etcd-ha-628553
	1a3ce3a4cad16       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   adfc5c5b9565a       kube-scheduler-ha-628553
	73e39c7d2b39b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ce8ef37c98c4f       kube-controller-manager-ha-628553
	919f5b2c17a09       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   923ba0f2be002       kube-apiserver-ha-628553
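
For reference, a container listing like the one above can typically be reproduced with crictl inside the node, assuming the ha-628553 cluster from this run is still up (the profile name here is inferred from the node names in this log, not confirmed elsewhere):

    out/minikube-linux-amd64 ssh -p ha-628553 -- sudo crictl ps -a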
	
	
	==> coredns [0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68] <==
	[INFO] 10.244.1.2:59173 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004406792s
	[INFO] 10.244.1.2:44478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000424413s
	[INFO] 10.244.1.2:58960 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000183491s
	[INFO] 10.244.1.3:35630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000291506s
	[INFO] 10.244.1.3:42806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002399052s
	[INFO] 10.244.1.3:42397 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126644s
	[INFO] 10.244.1.3:34571 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001931949s
	[INFO] 10.244.1.3:54485 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000378487s
	[INFO] 10.244.1.3:58977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105091s
	[INFO] 10.244.0.4:38892 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002053345s
	[INFO] 10.244.0.4:58836 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172655s
	[INFO] 10.244.0.4:55251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065314s
	[INFO] 10.244.0.4:53436 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001570291s
	[INFO] 10.244.0.4:48063 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00004804s
	[INFO] 10.244.1.2:57025 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153957s
	[INFO] 10.244.1.2:40431 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012349s
	[INFO] 10.244.1.3:37153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139765s
	[INFO] 10.244.1.3:45214 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157416s
	[INFO] 10.244.1.3:47978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094264s
	[INFO] 10.244.0.4:57791 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080137s
	[INFO] 10.244.1.2:51888 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215918s
	[INFO] 10.244.1.2:42893 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166709s
	[INFO] 10.244.1.3:36056 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000172229s
	[INFO] 10.244.1.3:44744 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113708s
	[INFO] 10.244.0.4:56467 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102183s
	
	
	==> coredns [4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed] <==
	[INFO] 10.244.1.3:51613 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000585499s
	[INFO] 10.244.1.3:40629 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001993531s
	[INFO] 10.244.0.4:40285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000080316s
	[INFO] 10.244.1.2:53385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200211s
	[INFO] 10.244.1.2:46841 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028903254s
	[INFO] 10.244.1.2:36156 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000295572s
	[INFO] 10.244.1.2:46979 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159813s
	[INFO] 10.244.1.3:47839 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190478s
	[INFO] 10.244.1.3:55618 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000314649s
	[INFO] 10.244.0.4:52728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150624s
	[INFO] 10.244.0.4:42394 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090784s
	[INFO] 10.244.0.4:57656 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107027s
	[INFO] 10.244.1.2:36030 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124775s
	[INFO] 10.244.1.2:57899 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082756s
	[INFO] 10.244.1.3:44889 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195326s
	[INFO] 10.244.0.4:59043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137163s
	[INFO] 10.244.0.4:52080 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217774s
	[INFO] 10.244.0.4:40645 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102774s
	[INFO] 10.244.1.2:59521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150669s
	[INFO] 10.244.1.2:34929 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205398s
	[INFO] 10.244.1.3:50337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185196s
	[INFO] 10.244.1.3:51645 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000242498s
	[INFO] 10.244.0.4:58847 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134448s
	[INFO] 10.244.0.4:51647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147028s
	[INFO] 10.244.0.4:54351 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131375s
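
The two coredns blocks above are ordinary query logs: each line records the client IP:port, the query type and name, the response code (NOERROR/NXDOMAIN), and the lookup latency. If fuller logs are needed, they can be pulled directly from the pods, assuming the kubeconfig context matches the profile name ha-628553:

    kubectl --context ha-628553 -n kube-system logs coredns-7c65d6cfc9-rsr6v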
	
	
	==> describe nodes <==
	Name:               ha-628553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_07_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-628553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a13f7b7982a74b9eb8f82488f9c3d1a6
	  System UUID:                a13f7b79-82a7-4b9e-b8f8-2488f9c3d1a6
	  Boot ID:                    288ea8ab-36c4-4d6a-9093-1f2ac800cc46
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vc5k8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7c65d6cfc9-ktmzq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 coredns-7c65d6cfc9-rsr6v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m11s
	  kube-system                 etcd-ha-628553                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-snf5v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-628553             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-628553    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-h6vg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-628553             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-628553                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m10s  kube-proxy       
	  Normal  Starting                 6m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m15s  kubelet          Node ha-628553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s  kubelet          Node ha-628553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s  kubelet          Node ha-628553 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m12s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  NodeReady                5m59s  kubelet          Node ha-628553 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  RegisteredNode           3m58s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	
	
	Name:               ha-628553-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_08_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:08:22 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:11:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-628553-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ba9ae7572f54f4ab8de307b6e86da52
	  System UUID:                4ba9ae75-72f5-4f4a-b8de-307b6e86da52
	  Boot ID:                    30fbb024-4877-4642-abd8-af8d3d30f079
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-75ng4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  default                     busybox-7dff88458-jhmrp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-628553-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-9rq2w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-628553-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-ha-628553-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-proxy-s5c6d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-628553-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-628553-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node ha-628553-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node ha-628553-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m21s)  kubelet          Node ha-628553-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-628553-m02 status is now: NodeNotReady
	
	
	Name:               ha-628553-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_09_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:09:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-628553-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aab92960db1b4070940c89c6ff930351
	  System UUID:                aab92960-db1b-4070-940c-89c6ff930351
	  Boot ID:                    77629bba-9229-47e7-80cf-730097c43666
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-628553-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m4s
	  kube-system                 kindnet-sb4xd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m7s
	  kube-system                 kube-apiserver-ha-628553-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-controller-manager-ha-628553-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-proxy-956k4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-ha-628553-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-vip-ha-628553-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  RegisteredNode           4m7s                 node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node ha-628553-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	
	
	Name:               ha-628553-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_10_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:10:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:11:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    ha-628553-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7e249f18a3f466abcbb6b94b02ed2ec
	  System UUID:                b7e249f1-8a3f-466a-bcbb-6b94b02ed2ec
	  Boot ID:                    dd833219-3ee8-4ed9-aae9-d441f250fa96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwk2r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-fkzqr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m2s)  kubelet          Node ha-628553-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m2s)  kubelet          Node ha-628553-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m2s)  kubelet          Node ha-628553-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-628553-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 12:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051409] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040490] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.878273] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.715451] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Oct 7 12:07] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.378547] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061855] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066201] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.180086] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.153013] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.284998] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.180207] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.207557] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.061569] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.415206] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.085223] kauditd_printk_skb: 79 callbacks suppressed
	[  +4.998659] kauditd_printk_skb: 26 callbacks suppressed
	[ +12.170600] kauditd_printk_skb: 33 callbacks suppressed
	[Oct 7 12:08] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969] <==
	{"level":"warn","ts":"2024-10-07T12:13:43.790546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.794760Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.799161Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.815622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.825858Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.833830Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.843619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.845295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.846396Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.851698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.858666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.865980Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.873751Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.877957Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.881018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.889748Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.889996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.896464Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.905966Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.912485Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.917316Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.926087Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.936375Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.944329Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:43.989882Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:13:44 up 6 min,  0 users,  load average: 0.56, 0.31, 0.15
	Linux ha-628553 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e] <==
	I1007 12:13:04.287265       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:14.286847       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:14.286893       1 main.go:299] handling current node
	I1007 12:13:14.286906       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:14.286913       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:14.287050       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:14.287071       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:14.287114       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:14.287136       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:24.295723       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:24.295884       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:24.296132       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:24.296167       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:24.296254       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:24.296275       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:24.296365       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:24.296384       1 main.go:299] handling current node
	I1007 12:13:34.285463       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:34.285588       1 main.go:299] handling current node
	I1007 12:13:34.285620       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:34.285640       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:34.285850       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:34.285880       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:34.285943       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:34.285960       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544] <==
	I1007 12:07:27.794940       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 12:07:27.933633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 12:07:32.075355       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1007 12:07:32.486677       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1007 12:08:23.102352       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.102586       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.764µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1007 12:08:23.104149       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.105567       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.106920       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.674679ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1007 12:10:08.360356       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40292: use of closed network connection
	E1007 12:10:08.561113       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40308: use of closed network connection
	E1007 12:10:08.787138       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40330: use of closed network connection
	E1007 12:10:09.028668       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40344: use of closed network connection
	E1007 12:10:09.244263       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40368: use of closed network connection
	E1007 12:10:09.466935       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40384: use of closed network connection
	E1007 12:10:09.660058       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40410: use of closed network connection
	E1007 12:10:09.852210       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40416: use of closed network connection
	E1007 12:10:10.061165       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40432: use of closed network connection
	E1007 12:10:10.408420       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40450: use of closed network connection
	E1007 12:10:10.612165       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40466: use of closed network connection
	E1007 12:10:10.805485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40472: use of closed network connection
	E1007 12:10:10.999177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40496: use of closed network connection
	E1007 12:10:11.210763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40502: use of closed network connection
	E1007 12:10:11.463496       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40532: use of closed network connection
	W1007 12:11:36.878261       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110 192.168.39.149]
	
	
	==> kube-controller-manager [73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee] <==
	I1007 12:10:41.965922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.001526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.152486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.245459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.660674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:45.679644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:45.726419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:46.774324       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-628553-m04"
	I1007 12:10:46.775093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:46.796998       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:52.359490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:01.889908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-628553-m04"
	I1007 12:11:01.891629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:01.908947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:02.079930       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:12.784052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:56.797865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-628553-m04"
	I1007 12:11:56.798196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:11:56.825210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:11:56.976985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.040351ms"
	I1007 12:11:56.977093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.478µs"
	I1007 12:11:57.005615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.252446ms"
	I1007 12:11:57.005705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.783µs"
	I1007 12:12:00.745939       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:12:02.094451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	
	
	==> kube-proxy [4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:07:33.298365       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:07:33.336456       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E1007 12:07:33.336571       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:07:33.434284       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:07:33.434331       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:07:33.434355       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:07:33.445592       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:07:33.454423       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:07:33.454444       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:07:33.463602       1 config.go:199] "Starting service config controller"
	I1007 12:07:33.467216       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:07:33.467268       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:07:33.467274       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:07:33.472850       1 config.go:328] "Starting node config controller"
	I1007 12:07:33.472863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:07:33.568004       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:07:33.568062       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:07:33.573613       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4] <==
	E1007 12:07:26.382246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:07:26.387024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 12:07:26.387119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:07:26.410415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 12:07:26.410570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 12:07:27.604975       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 12:10:03.714499       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="38d0a2a6-0d77-403c-86e7-405837d8ca25" pod="default/busybox-7dff88458-jhmrp" assumedNode="ha-628553-m02" currentNode="ha-628553-m03"
	E1007 12:10:03.740391       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jhmrp\": pod busybox-7dff88458-jhmrp is already assigned to node \"ha-628553-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jhmrp" node="ha-628553-m03"
	E1007 12:10:03.743143       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 38d0a2a6-0d77-403c-86e7-405837d8ca25(default/busybox-7dff88458-jhmrp) was assumed on ha-628553-m03 but assigned to ha-628553-m02" pod="default/busybox-7dff88458-jhmrp"
	E1007 12:10:03.745165       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jhmrp\": pod busybox-7dff88458-jhmrp is already assigned to node \"ha-628553-m02\"" pod="default/busybox-7dff88458-jhmrp"
	I1007 12:10:03.747831       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jhmrp" node="ha-628553-m02"
	E1007 12:10:03.791061       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vc5k8\": pod busybox-7dff88458-vc5k8 is already assigned to node \"ha-628553\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vc5k8" node="ha-628553-m03"
	E1007 12:10:03.791192       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vc5k8\": pod busybox-7dff88458-vc5k8 is already assigned to node \"ha-628553\"" pod="default/busybox-7dff88458-vc5k8"
	E1007 12:10:03.910449       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-47zsz\": pod busybox-7dff88458-47zsz is already assigned to node \"ha-628553-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-47zsz" node="ha-628553-m03"
	E1007 12:10:03.910515       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 674a626e-9fe6-4875-a34f-cc4d729e2bb1(default/busybox-7dff88458-47zsz) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-47zsz"
	E1007 12:10:03.910531       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-47zsz\": pod busybox-7dff88458-47zsz is already assigned to node \"ha-628553-m03\"" pod="default/busybox-7dff88458-47zsz"
	I1007 12:10:03.910555       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-47zsz" node="ha-628553-m03"
	E1007 12:10:42.040635       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwk2r\": pod kindnet-rwk2r is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwk2r" node="ha-628553-m04"
	E1007 12:10:42.042987       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwk2r\": pod kindnet-rwk2r is already assigned to node \"ha-628553-m04\"" pod="kube-system/kindnet-rwk2r"
	E1007 12:10:42.079633       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kl4j4\": pod kindnet-kl4j4 is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kl4j4" node="ha-628553-m04"
	E1007 12:10:42.079724       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 244c4da8-46b7-4627-a7ad-60e7ff405b0a(kube-system/kindnet-kl4j4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kl4j4"
	E1007 12:10:42.079846       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kl4j4\": pod kindnet-kl4j4 is already assigned to node \"ha-628553-m04\"" pod="kube-system/kindnet-kl4j4"
	I1007 12:10:42.079871       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kl4j4" node="ha-628553-m04"
	E1007 12:10:42.086167       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-g2fwp\": pod kube-proxy-g2fwp is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-g2fwp" node="ha-628553-m04"
	E1007 12:10:42.086272       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-g2fwp\": pod kube-proxy-g2fwp is already assigned to node \"ha-628553-m04\"" pod="kube-system/kube-proxy-g2fwp"
	
	
	==> kubelet <==
	Oct 07 12:12:27 ha-628553 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:12:27 ha-628553 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:12:27 ha-628553 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:12:27 ha-628553 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:12:28 ha-628553 kubelet[1314]: E1007 12:12:28.044744    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303148044534034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:28 ha-628553 kubelet[1314]: E1007 12:12:28.044838    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303148044534034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:38 ha-628553 kubelet[1314]: E1007 12:12:38.050523    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303158047005260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:38 ha-628553 kubelet[1314]: E1007 12:12:38.051561    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303158047005260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:48 ha-628553 kubelet[1314]: E1007 12:12:48.053900    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303168053449361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:48 ha-628553 kubelet[1314]: E1007 12:12:48.053963    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303168053449361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:58 ha-628553 kubelet[1314]: E1007 12:12:58.055856    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303178055537621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:58 ha-628553 kubelet[1314]: E1007 12:12:58.055895    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303178055537621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:08 ha-628553 kubelet[1314]: E1007 12:13:08.057102    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303188056723208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:08 ha-628553 kubelet[1314]: E1007 12:13:08.057351    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303188056723208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:18 ha-628553 kubelet[1314]: E1007 12:13:18.061478    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303198060609364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:18 ha-628553 kubelet[1314]: E1007 12:13:18.061853    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303198060609364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:27 ha-628553 kubelet[1314]: E1007 12:13:27.990111    1314 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:13:27 ha-628553 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:13:28 ha-628553 kubelet[1314]: E1007 12:13:28.063998    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303208063333958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:28 ha-628553 kubelet[1314]: E1007 12:13:28.064098    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303208063333958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:38 ha-628553 kubelet[1314]: E1007 12:13:38.066580    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303218065435839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:38 ha-628553 kubelet[1314]: E1007 12:13:38.066632    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303218065435839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-628553 -n ha-628553
helpers_test.go:261: (dbg) Run:  kubectl --context ha-628553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.405319294s)
ha_test.go:415: expected profile "ha-628553" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-628553\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-628553\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-628553\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.110\",\"Port\":8443,\"Kub
ernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.169\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.149\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.119\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false
,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSiz
e\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-628553 -n ha-628553
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-628553 logs -n 25: (1.44026237s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553:/home/docker/cp-test_ha-628553-m03_ha-628553.txt                       |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553 sudo cat                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553.txt                                 |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m04 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp testdata/cp-test.txt                                                | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553:/home/docker/cp-test_ha-628553-m04_ha-628553.txt                       |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553 sudo cat                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553.txt                                 |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03:/home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m03 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-628553 node stop m02 -v=7                                                     | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
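The table above is the audit trail for the CopyFile step: each round pushes testdata/cp-test.txt onto a node with "minikube cp", then reads the file back on both the source and the destination node with "minikube ssh -n ... sudo cat". One round can be replayed by hand roughly as follows (profile, node names and paths are taken from the table; this is a sketch, not the test's own helper code):

    # Replay of one CopyFile round from the table above
    MINIKUBE=out/minikube-linux-amd64
    PROFILE=ha-628553

    # copy a local file onto node m04, then read it back on that node
    $MINIKUBE -p $PROFILE cp testdata/cp-test.txt $PROFILE-m04:/home/docker/cp-test.txt
    $MINIKUBE -p $PROFILE ssh -n $PROFILE-m04 "sudo cat /home/docker/cp-test.txt"

    # copy node-to-node (m04 -> m03) and verify on the destination
    $MINIKUBE -p $PROFILE cp $PROFILE-m04:/home/docker/cp-test.txt \
        $PROFILE-m03:/home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt
    $MINIKUBE -p $PROFILE ssh -n $PROFILE-m03 "sudo cat /home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt"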
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:06:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:06:46.248953  401591 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:06:46.249102  401591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:46.249113  401591 out.go:358] Setting ErrFile to fd 2...
	I1007 12:06:46.249117  401591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:46.249326  401591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:06:46.249966  401591 out.go:352] Setting JSON to false
	I1007 12:06:46.250938  401591 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6552,"bootTime":1728296254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:06:46.251073  401591 start.go:139] virtualization: kvm guest
	I1007 12:06:46.253469  401591 out.go:177] * [ha-628553] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:06:46.255142  401591 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:06:46.255180  401591 notify.go:220] Checking for updates...
	I1007 12:06:46.257412  401591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:06:46.258630  401591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:06:46.259784  401591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.261129  401591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:06:46.262379  401591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:06:46.263655  401591 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:06:46.300943  401591 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 12:06:46.302472  401591 start.go:297] selected driver: kvm2
	I1007 12:06:46.302493  401591 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:06:46.302513  401591 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:06:46.303566  401591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:06:46.303697  401591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:06:46.319358  401591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:06:46.319408  401591 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:06:46.319656  401591 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:06:46.319692  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:06:46.319741  401591 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 12:06:46.319766  401591 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:06:46.319825  401591 start.go:340] cluster config:
	{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
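The block above is the generated cluster config that is saved to profiles/ha-628553/config.json a few lines later; the settings that drive this run are Driver:kvm2, Memory:2200, CPUs:2, DiskSize:20000, ContainerRuntime:crio, KubernetesVersion:v1.31.1 and MultiNodeRequested:true. A start invocation along these lines produces a config of this shape (illustrative only; the exact flag set the test passed is not shown in this excerpt):

    # Illustrative start command yielding a profile config like the one above
    # (the test's exact flags are not shown here).
    out/minikube-linux-amd64 start -p ha-628553 \
        --driver=kvm2 \
        --container-runtime=crio \
        --kubernetes-version=v1.31.1 \
        --memory=2200 --cpus=2 --disk-size=20000mb \
        --ha -v=7 --alsologtostderr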
	I1007 12:06:46.319936  401591 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:06:46.321805  401591 out.go:177] * Starting "ha-628553" primary control-plane node in "ha-628553" cluster
	I1007 12:06:46.323163  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:06:46.323208  401591 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:06:46.323219  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:06:46.323305  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:06:46.323316  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:06:46.323679  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:06:46.323704  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json: {Name:mk2a07965de558fa93dada604e58b87e56b9c04c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:06:46.323847  401591 start.go:360] acquireMachinesLock for ha-628553: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:06:46.323875  401591 start.go:364] duration metric: took 15.967µs to acquireMachinesLock for "ha-628553"
	I1007 12:06:46.323891  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:06:46.323965  401591 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 12:06:46.325764  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:06:46.325922  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:06:46.325971  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:06:46.341278  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I1007 12:06:46.341788  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:06:46.342304  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:06:46.342327  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:06:46.342728  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:06:46.342902  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:06:46.343093  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:06:46.343232  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:06:46.343262  401591 client.go:168] LocalClient.Create starting
	I1007 12:06:46.343300  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:06:46.343339  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:06:46.343361  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:06:46.343431  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:06:46.343449  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:06:46.343461  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:06:46.343477  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:06:46.343525  401591 main.go:141] libmachine: (ha-628553) Calling .PreCreateCheck
	I1007 12:06:46.343857  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:06:46.344200  401591 main.go:141] libmachine: Creating machine...
	I1007 12:06:46.344213  401591 main.go:141] libmachine: (ha-628553) Calling .Create
	I1007 12:06:46.344334  401591 main.go:141] libmachine: (ha-628553) Creating KVM machine...
	I1007 12:06:46.345527  401591 main.go:141] libmachine: (ha-628553) DBG | found existing default KVM network
	I1007 12:06:46.346242  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.346122  401614 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I1007 12:06:46.346346  401591 main.go:141] libmachine: (ha-628553) DBG | created network xml: 
	I1007 12:06:46.346370  401591 main.go:141] libmachine: (ha-628553) DBG | <network>
	I1007 12:06:46.346380  401591 main.go:141] libmachine: (ha-628553) DBG |   <name>mk-ha-628553</name>
	I1007 12:06:46.346391  401591 main.go:141] libmachine: (ha-628553) DBG |   <dns enable='no'/>
	I1007 12:06:46.346402  401591 main.go:141] libmachine: (ha-628553) DBG |   
	I1007 12:06:46.346407  401591 main.go:141] libmachine: (ha-628553) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 12:06:46.346415  401591 main.go:141] libmachine: (ha-628553) DBG |     <dhcp>
	I1007 12:06:46.346420  401591 main.go:141] libmachine: (ha-628553) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 12:06:46.346428  401591 main.go:141] libmachine: (ha-628553) DBG |     </dhcp>
	I1007 12:06:46.346439  401591 main.go:141] libmachine: (ha-628553) DBG |   </ip>
	I1007 12:06:46.346452  401591 main.go:141] libmachine: (ha-628553) DBG |   
	I1007 12:06:46.346459  401591 main.go:141] libmachine: (ha-628553) DBG | </network>
	I1007 12:06:46.346484  401591 main.go:141] libmachine: (ha-628553) DBG | 
	I1007 12:06:46.351921  401591 main.go:141] libmachine: (ha-628553) DBG | trying to create private KVM network mk-ha-628553 192.168.39.0/24...
	I1007 12:06:46.427414  401591 main.go:141] libmachine: (ha-628553) DBG | private KVM network mk-ha-628553 192.168.39.0/24 created
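The driver creates the private mk-ha-628553 network from the XML printed above (a 192.168.39.0/24 range with libvirt's DHCP handing out .2 through .253). The same network can be inspected, or recreated by hand, with virsh against the qemu:///system URI the log uses:

    # Inspect the network the driver just created
    virsh --connect qemu:///system net-list --all             # mk-ha-628553 should be listed as active
    virsh --connect qemu:///system net-dumpxml mk-ha-628553   # XML matching the block above

    # Or define and start it by hand from a file holding that XML
    virsh --connect qemu:///system net-define mk-ha-628553.xml
    virsh --connect qemu:///system net-start mk-ha-628553
    virsh --connect qemu:///system net-autostart mk-ha-628553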
	I1007 12:06:46.427467  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.427375  401614 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.427482  401591 main.go:141] libmachine: (ha-628553) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 ...
	I1007 12:06:46.427511  401591 main.go:141] libmachine: (ha-628553) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:06:46.427534  401591 main.go:141] libmachine: (ha-628553) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:06:46.734984  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.734782  401614 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa...
	I1007 12:06:46.872452  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.872289  401614 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/ha-628553.rawdisk...
	I1007 12:06:46.872482  401591 main.go:141] libmachine: (ha-628553) DBG | Writing magic tar header
	I1007 12:06:46.872494  401591 main.go:141] libmachine: (ha-628553) DBG | Writing SSH key tar header
	I1007 12:06:46.872500  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.872414  401614 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 ...
	I1007 12:06:46.872528  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553
	I1007 12:06:46.872550  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 (perms=drwx------)
	I1007 12:06:46.872558  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:06:46.872571  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:06:46.872585  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.872599  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:06:46.872642  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:06:46.872667  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:06:46.872679  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:06:46.872704  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:06:46.872718  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home
	I1007 12:06:46.872731  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:06:46.872746  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:06:46.872756  401591 main.go:141] libmachine: (ha-628553) Creating domain...
	I1007 12:06:46.872770  401591 main.go:141] libmachine: (ha-628553) DBG | Skipping /home - not owner
	I1007 12:06:46.873981  401591 main.go:141] libmachine: (ha-628553) define libvirt domain using xml: 
	I1007 12:06:46.874013  401591 main.go:141] libmachine: (ha-628553) <domain type='kvm'>
	I1007 12:06:46.874020  401591 main.go:141] libmachine: (ha-628553)   <name>ha-628553</name>
	I1007 12:06:46.874024  401591 main.go:141] libmachine: (ha-628553)   <memory unit='MiB'>2200</memory>
	I1007 12:06:46.874029  401591 main.go:141] libmachine: (ha-628553)   <vcpu>2</vcpu>
	I1007 12:06:46.874033  401591 main.go:141] libmachine: (ha-628553)   <features>
	I1007 12:06:46.874038  401591 main.go:141] libmachine: (ha-628553)     <acpi/>
	I1007 12:06:46.874041  401591 main.go:141] libmachine: (ha-628553)     <apic/>
	I1007 12:06:46.874076  401591 main.go:141] libmachine: (ha-628553)     <pae/>
	I1007 12:06:46.874106  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874128  401591 main.go:141] libmachine: (ha-628553)   </features>
	I1007 12:06:46.874148  401591 main.go:141] libmachine: (ha-628553)   <cpu mode='host-passthrough'>
	I1007 12:06:46.874160  401591 main.go:141] libmachine: (ha-628553)   
	I1007 12:06:46.874169  401591 main.go:141] libmachine: (ha-628553)   </cpu>
	I1007 12:06:46.874177  401591 main.go:141] libmachine: (ha-628553)   <os>
	I1007 12:06:46.874184  401591 main.go:141] libmachine: (ha-628553)     <type>hvm</type>
	I1007 12:06:46.874189  401591 main.go:141] libmachine: (ha-628553)     <boot dev='cdrom'/>
	I1007 12:06:46.874195  401591 main.go:141] libmachine: (ha-628553)     <boot dev='hd'/>
	I1007 12:06:46.874201  401591 main.go:141] libmachine: (ha-628553)     <bootmenu enable='no'/>
	I1007 12:06:46.874209  401591 main.go:141] libmachine: (ha-628553)   </os>
	I1007 12:06:46.874217  401591 main.go:141] libmachine: (ha-628553)   <devices>
	I1007 12:06:46.874227  401591 main.go:141] libmachine: (ha-628553)     <disk type='file' device='cdrom'>
	I1007 12:06:46.874240  401591 main.go:141] libmachine: (ha-628553)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/boot2docker.iso'/>
	I1007 12:06:46.874254  401591 main.go:141] libmachine: (ha-628553)       <target dev='hdc' bus='scsi'/>
	I1007 12:06:46.874286  401591 main.go:141] libmachine: (ha-628553)       <readonly/>
	I1007 12:06:46.874302  401591 main.go:141] libmachine: (ha-628553)     </disk>
	I1007 12:06:46.874308  401591 main.go:141] libmachine: (ha-628553)     <disk type='file' device='disk'>
	I1007 12:06:46.874314  401591 main.go:141] libmachine: (ha-628553)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:06:46.874328  401591 main.go:141] libmachine: (ha-628553)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/ha-628553.rawdisk'/>
	I1007 12:06:46.874335  401591 main.go:141] libmachine: (ha-628553)       <target dev='hda' bus='virtio'/>
	I1007 12:06:46.874340  401591 main.go:141] libmachine: (ha-628553)     </disk>
	I1007 12:06:46.874346  401591 main.go:141] libmachine: (ha-628553)     <interface type='network'>
	I1007 12:06:46.874352  401591 main.go:141] libmachine: (ha-628553)       <source network='mk-ha-628553'/>
	I1007 12:06:46.874358  401591 main.go:141] libmachine: (ha-628553)       <model type='virtio'/>
	I1007 12:06:46.874363  401591 main.go:141] libmachine: (ha-628553)     </interface>
	I1007 12:06:46.874369  401591 main.go:141] libmachine: (ha-628553)     <interface type='network'>
	I1007 12:06:46.874375  401591 main.go:141] libmachine: (ha-628553)       <source network='default'/>
	I1007 12:06:46.874381  401591 main.go:141] libmachine: (ha-628553)       <model type='virtio'/>
	I1007 12:06:46.874386  401591 main.go:141] libmachine: (ha-628553)     </interface>
	I1007 12:06:46.874395  401591 main.go:141] libmachine: (ha-628553)     <serial type='pty'>
	I1007 12:06:46.874400  401591 main.go:141] libmachine: (ha-628553)       <target port='0'/>
	I1007 12:06:46.874409  401591 main.go:141] libmachine: (ha-628553)     </serial>
	I1007 12:06:46.874429  401591 main.go:141] libmachine: (ha-628553)     <console type='pty'>
	I1007 12:06:46.874446  401591 main.go:141] libmachine: (ha-628553)       <target type='serial' port='0'/>
	I1007 12:06:46.874474  401591 main.go:141] libmachine: (ha-628553)     </console>
	I1007 12:06:46.874484  401591 main.go:141] libmachine: (ha-628553)     <rng model='virtio'>
	I1007 12:06:46.874505  401591 main.go:141] libmachine: (ha-628553)       <backend model='random'>/dev/random</backend>
	I1007 12:06:46.874515  401591 main.go:141] libmachine: (ha-628553)     </rng>
	I1007 12:06:46.874526  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874539  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874559  401591 main.go:141] libmachine: (ha-628553)   </devices>
	I1007 12:06:46.874569  401591 main.go:141] libmachine: (ha-628553) </domain>
	I1007 12:06:46.874620  401591 main.go:141] libmachine: (ha-628553) 
	I1007 12:06:46.879724  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:6a:a7:e1 in network default
	I1007 12:06:46.880361  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:46.880382  401591 main.go:141] libmachine: (ha-628553) Ensuring networks are active...
	I1007 12:06:46.881257  401591 main.go:141] libmachine: (ha-628553) Ensuring network default is active
	I1007 12:06:46.881675  401591 main.go:141] libmachine: (ha-628553) Ensuring network mk-ha-628553 is active
	I1007 12:06:46.882336  401591 main.go:141] libmachine: (ha-628553) Getting domain xml...
	I1007 12:06:46.883247  401591 main.go:141] libmachine: (ha-628553) Creating domain...
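The domain XML above describes the guest itself: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and one virtio NIC on each of the mk-ha-628553 and default networks. The equivalent operations done by hand with virsh look like this:

    # Inspect the domain the driver defined
    virsh --connect qemu:///system dominfo ha-628553    # state, vCPUs, memory
    virsh --connect qemu:///system dumpxml ha-628553    # full definition as libvirt stores it

    # Define and start an equivalent guest from a file holding that XML
    virsh --connect qemu:///system define ha-628553.xml
    virsh --connect qemu:///system start ha-628553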
	I1007 12:06:48.123283  401591 main.go:141] libmachine: (ha-628553) Waiting to get IP...
	I1007 12:06:48.124056  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.124511  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.124563  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.124510  401614 retry.go:31] will retry after 252.804778ms: waiting for machine to come up
	I1007 12:06:48.379035  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.379469  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.379489  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.379438  401614 retry.go:31] will retry after 356.807953ms: waiting for machine to come up
	I1007 12:06:48.738267  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.738722  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.738745  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.738688  401614 retry.go:31] will retry after 447.95167ms: waiting for machine to come up
	I1007 12:06:49.188519  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:49.188950  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:49.189019  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:49.188950  401614 retry.go:31] will retry after 486.200273ms: waiting for machine to come up
	I1007 12:06:49.676646  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:49.677063  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:49.677096  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:49.677017  401614 retry.go:31] will retry after 751.80427ms: waiting for machine to come up
	I1007 12:06:50.430789  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:50.431237  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:50.431260  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:50.431198  401614 retry.go:31] will retry after 897.786106ms: waiting for machine to come up
	I1007 12:06:51.330467  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:51.330831  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:51.330901  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:51.330836  401614 retry.go:31] will retry after 793.545437ms: waiting for machine to come up
	I1007 12:06:52.125725  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:52.126243  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:52.126280  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:52.126156  401614 retry.go:31] will retry after 986.036634ms: waiting for machine to come up
	I1007 12:06:53.113559  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:53.113953  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:53.113997  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:53.113901  401614 retry.go:31] will retry after 1.340335374s: waiting for machine to come up
	I1007 12:06:54.456245  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:54.456708  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:54.456732  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:54.456674  401614 retry.go:31] will retry after 1.447575739s: waiting for machine to come up
	I1007 12:06:55.906303  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:55.906806  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:55.906840  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:55.906747  401614 retry.go:31] will retry after 2.291446715s: waiting for machine to come up
	I1007 12:06:58.200323  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:58.200867  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:58.200896  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:58.200813  401614 retry.go:31] will retry after 2.450660794s: waiting for machine to come up
	I1007 12:07:00.654450  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:00.655019  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:07:00.655050  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:07:00.654943  401614 retry.go:31] will retry after 4.454613315s: waiting for machine to come up
	I1007 12:07:05.114240  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:05.114649  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:07:05.114678  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:07:05.114610  401614 retry.go:31] will retry after 4.13354174s: waiting for machine to come up
	I1007 12:07:09.251818  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.252270  401591 main.go:141] libmachine: (ha-628553) Found IP for machine: 192.168.39.110
	I1007 12:07:09.252297  401591 main.go:141] libmachine: (ha-628553) Reserving static IP address...
	I1007 12:07:09.252306  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has current primary IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.252723  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find host DHCP lease matching {name: "ha-628553", mac: "52:54:00:7b:12:fd", ip: "192.168.39.110"} in network mk-ha-628553
	I1007 12:07:09.328075  401591 main.go:141] libmachine: (ha-628553) DBG | Getting to WaitForSSH function...
	I1007 12:07:09.328108  401591 main.go:141] libmachine: (ha-628553) Reserved static IP address: 192.168.39.110
	I1007 12:07:09.328119  401591 main.go:141] libmachine: (ha-628553) Waiting for SSH to be available...
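The retry loop above is, in effect, waiting for the guest's MAC address (52:54:00:7b:12:fd) to pick up a DHCP lease on mk-ha-628553, backing off a little longer on each attempt until 192.168.39.110 appears. The lease it is waiting on can be watched directly from libvirt:

    # Watch for the lease the retry loop is polling for
    virsh --connect qemu:///system net-dhcp-leases mk-ha-628553
    # Once a lease exists, the address is also reported per domain
    virsh --connect qemu:///system domifaddr ha-628553 --source lease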
	I1007 12:07:09.330775  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.331429  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.331468  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.331645  401591 main.go:141] libmachine: (ha-628553) DBG | Using SSH client type: external
	I1007 12:07:09.331670  401591 main.go:141] libmachine: (ha-628553) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa (-rw-------)
	I1007 12:07:09.331710  401591 main.go:141] libmachine: (ha-628553) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:07:09.331724  401591 main.go:141] libmachine: (ha-628553) DBG | About to run SSH command:
	I1007 12:07:09.331736  401591 main.go:141] libmachine: (ha-628553) DBG | exit 0
	I1007 12:07:09.455242  401591 main.go:141] libmachine: (ha-628553) DBG | SSH cmd err, output: <nil>: 
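The SSH probe above uses an external ssh client with the options listed in the log; collected into a single command with the key path and address from the log, it is equivalent to:

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa \
        -p 22 docker@192.168.39.110 'exit 0'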
	I1007 12:07:09.455632  401591 main.go:141] libmachine: (ha-628553) KVM machine creation complete!
	I1007 12:07:09.455937  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:07:09.456561  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:09.456802  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:09.457023  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:07:09.457043  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:09.458370  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:07:09.458386  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:07:09.458404  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:07:09.458413  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.460807  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.461171  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.461207  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.461300  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.461468  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.461645  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.461780  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.461919  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.462158  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.462173  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:07:09.562645  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:09.562687  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:07:09.562725  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.565649  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.565971  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.566008  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.566176  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.566388  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.566561  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.566676  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.566830  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.567082  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.567099  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:07:09.667847  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:07:09.667941  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:07:09.667948  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:07:09.667957  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.668229  401591 buildroot.go:166] provisioning hostname "ha-628553"
	I1007 12:07:09.668263  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.668471  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.671034  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.671389  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.671427  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.671579  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.671743  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.671923  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.672060  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.672217  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.672404  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.672417  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553 && echo "ha-628553" | sudo tee /etc/hostname
	I1007 12:07:09.786631  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553
	
	I1007 12:07:09.786665  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.789427  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.789744  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.789774  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.789989  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.790273  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.790426  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.790549  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.790707  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.790919  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.790942  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:07:09.900194  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:09.900232  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:07:09.900296  401591 buildroot.go:174] setting up certificates
	I1007 12:07:09.900321  401591 provision.go:84] configureAuth start
	I1007 12:07:09.900343  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.900684  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:09.903579  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.904022  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.904048  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.904222  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.906311  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.906630  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.906658  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.906830  401591 provision.go:143] copyHostCerts
	I1007 12:07:09.906874  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:09.906920  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:07:09.906937  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:09.907109  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:07:09.907203  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:09.907224  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:07:09.907232  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:09.907258  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:07:09.907319  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:09.907341  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:07:09.907348  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:09.907368  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:07:09.907427  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553 san=[127.0.0.1 192.168.39.110 ha-628553 localhost minikube]
	I1007 12:07:09.982701  401591 provision.go:177] copyRemoteCerts
	I1007 12:07:09.982771  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:07:09.982796  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.985547  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.985859  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.985888  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.986044  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.986244  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.986399  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.986506  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.070065  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:07:10.070156  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:07:10.096714  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:07:10.096790  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:07:10.123505  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:07:10.123591  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:07:10.149487  401591 provision.go:87] duration metric: took 249.146606ms to configureAuth
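configureAuth generates a server certificate for the new machine, and copyRemoteCerts then pushes three files onto the guest: certs/ca.pem plus the freshly written machines/server.pem and machines/server-key.pem, all landing under /etc/docker. A rough by-hand equivalent using the same key and address (a sketch only; ssh_runner stages the files differently, but the docker user still needs sudo to write under /etc/docker):

    M=/home/jenkins/minikube-integration/19763-377026/.minikube
    KEY=$M/machines/ha-628553/id_rsa
    OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i $KEY"

    ssh $OPTS docker@192.168.39.110 'sudo mkdir -p /etc/docker'
    scp $OPTS $M/certs/ca.pem $M/machines/server.pem $M/machines/server-key.pem \
        docker@192.168.39.110:/tmp/
    ssh $OPTS docker@192.168.39.110 \
        'sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'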
	I1007 12:07:10.149524  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:07:10.149723  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:10.149836  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.152585  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.152880  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.152910  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.153069  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.153241  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.153400  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.153553  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.153691  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:10.153888  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:10.153903  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:07:10.373356  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
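Note: the SSH command above writes a one-line sysconfig drop-in for CRI-O and restarts the service so the --insecure-registry flag takes effect. A minimal Go sketch of how such a command string could be assembled; the helper name and structure are illustrative, not minikube's actual API:

package main

import "fmt"

// buildCrioSysconfigCmd composes the shell pipeline seen in the log:
// write /etc/sysconfig/crio.minikube and restart CRI-O so the new
// registry option is picked up. Sketch only.
func buildCrioSysconfigCmd(insecureRegistry string) string {
    opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", insecureRegistry)
    return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\n%s\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
}

func main() {
    fmt.Println(buildCrioSysconfigCmd("10.96.0.0/12"))
}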
	
	I1007 12:07:10.373398  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:07:10.373429  401591 main.go:141] libmachine: (ha-628553) Calling .GetURL
	I1007 12:07:10.374673  401591 main.go:141] libmachine: (ha-628553) DBG | Using libvirt version 6000000
	I1007 12:07:10.376989  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.377347  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.377371  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.377519  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:07:10.377531  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:07:10.377548  401591 client.go:171] duration metric: took 24.034266127s to LocalClient.Create
	I1007 12:07:10.377571  401591 start.go:167] duration metric: took 24.034341329s to libmachine.API.Create "ha-628553"
	I1007 12:07:10.377581  401591 start.go:293] postStartSetup for "ha-628553" (driver="kvm2")
	I1007 12:07:10.377593  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:07:10.377610  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.377871  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:07:10.377899  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.380000  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.380320  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.380343  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.380475  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.380648  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.380799  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.380960  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.461919  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:07:10.466913  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:07:10.466951  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:07:10.467055  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:07:10.467179  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:07:10.467195  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:07:10.467315  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:07:10.478269  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:10.503960  401591 start.go:296] duration metric: took 126.358927ms for postStartSetup
	I1007 12:07:10.504030  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:07:10.504699  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:10.507315  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.507612  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.507660  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.507956  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:10.508187  401591 start.go:128] duration metric: took 24.184210305s to createHost
	I1007 12:07:10.508226  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.510480  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.510789  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.510822  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.511033  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.511256  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.511415  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.511573  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.511733  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:10.511905  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:10.511924  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:07:10.611827  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302830.585700119
	
	I1007 12:07:10.611860  401591 fix.go:216] guest clock: 1728302830.585700119
	I1007 12:07:10.611870  401591 fix.go:229] Guest: 2024-10-07 12:07:10.585700119 +0000 UTC Remote: 2024-10-07 12:07:10.508202357 +0000 UTC m=+24.300236101 (delta=77.497762ms)
	I1007 12:07:10.611911  401591 fix.go:200] guest clock delta is within tolerance: 77.497762ms
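Note: the fix.go lines above compare the guest's `date +%s.%N` output against the local clock and accept the drift when it is small. A sketch of that comparison (the function name and the tolerance value are assumptions, not minikube's code):

package main

import (
    "fmt"
    "time"
)

// withinClockTolerance reports whether guest and local clocks differ by
// less than the given tolerance, mirroring the "guest clock delta is
// within tolerance" check logged above. Illustrative only.
func withinClockTolerance(guest, local time.Time, tolerance time.Duration) (time.Duration, bool) {
    delta := guest.Sub(local)
    if delta < 0 {
        delta = -delta
    }
    return delta, delta <= tolerance
}

func main() {
    guest := time.Unix(0, 1728302830585700119) // parsed from `date +%s.%N` on the guest
    local := guest.Add(-77497762 * time.Nanosecond)
    d, ok := withinClockTolerance(guest, local, 2*time.Second)
    fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}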
	I1007 12:07:10.611917  401591 start.go:83] releasing machines lock for "ha-628553", held for 24.288033555s
	I1007 12:07:10.611944  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.612216  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:10.614566  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.614868  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.614895  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.615083  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.615721  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.615950  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.616059  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:07:10.616101  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.616157  401591 ssh_runner.go:195] Run: cat /version.json
	I1007 12:07:10.616184  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.618780  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.618978  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619174  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.619193  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619348  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.619390  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619659  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.619672  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.619840  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.619847  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.620016  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.620024  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.620177  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.620181  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.718502  401591 ssh_runner.go:195] Run: systemctl --version
	I1007 12:07:10.724799  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:07:10.886272  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:07:10.893483  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:07:10.893578  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:07:10.909850  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
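Note: the find/mv pipeline above renames any bridge or podman CNI configs so they do not conflict with the CNI that minikube installs later. A Go sketch of the same idea (directory path and function name are illustrative):

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

// disableBridgeCNIConfigs appends .mk_disabled to bridge/podman CNI config
// files in dir, like the "find ... -exec mv {} {}.mk_disabled" in the log.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return nil, err
    }
    var disabled []string
    for _, e := range entries {
        name := e.Name()
        if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
            continue
        }
        if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
            src := filepath.Join(dir, name)
            if err := os.Rename(src, src+".mk_disabled"); err != nil {
                return disabled, err
            }
            disabled = append(disabled, src)
        }
    }
    return disabled, nil
}

func main() {
    out, err := disableBridgeCNIConfigs("/tmp/cni-net.d")
    fmt.Println(out, err)
}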
	I1007 12:07:10.909880  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:07:10.909961  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:07:10.926247  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:07:10.941251  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:07:10.941339  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:07:10.955771  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:07:10.969831  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:07:11.084350  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:07:11.233191  401591 docker.go:233] disabling docker service ...
	I1007 12:07:11.233261  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:07:11.257607  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:07:11.272121  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:07:11.404315  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:07:11.544026  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:07:11.559395  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:07:11.580516  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:07:11.580580  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.592830  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:07:11.592905  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.604197  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.615375  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.626652  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:07:11.638161  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.649289  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.668010  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
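Note: the run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the "pod" cgroup, and open unprivileged ports via default_sysctls. A simplified Go sketch of the same rewrites on a config string (it collapses the delete-then-insert sed steps into plain replacements and is not minikube's implementation):

package main

import (
    "fmt"
    "regexp"
)

// applyCrioOverrides rewrites an 02-crio.conf fragment roughly the way the
// sed invocations in the log do. Sketch only.
func applyCrioOverrides(conf, pauseImage, cgroupManager string) string {
    conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
        ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
        ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
        ReplaceAllString(conf, `conmon_cgroup = "pod"`)
    return conf
}

func main() {
    in := "pause_image = \"example.invalid/pause:old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
    fmt.Print(applyCrioOverrides(in, "registry.k8s.io/pause:3.10", "cgroupfs"))
}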
	I1007 12:07:11.679654  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:07:11.690371  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:07:11.690448  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:07:11.704718  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
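Note: when the bridge-nf-call-iptables sysctl is missing (the status 255 error above), minikube loads the br_netfilter module and then enables IPv4 forwarding. A sketch of that fallback using only standard-library calls; error handling is kept minimal and the function name is an assumption:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: probe the sysctl,
// load br_netfilter if it is absent, then turn on ip_forward. Needs root.
func ensureBridgeNetfilter() error {
    if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
        // The sysctl does not exist yet, so the bridge netfilter module is not loaded.
        if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
            return fmt.Errorf("modprobe br_netfilter: %w", err)
        }
    }
    return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
}

func main() {
    if err := ensureBridgeNetfilter(); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}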
	I1007 12:07:11.715762  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:11.825411  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:07:11.918378  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:07:11.918470  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:07:11.923527  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:07:11.923612  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:07:11.927764  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:07:11.977811  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:07:11.977922  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:07:12.007918  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:07:12.039043  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:07:12.040655  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:12.043258  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:12.043618  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:12.043660  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:12.043867  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:07:12.048464  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
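Note: the bash one-liner above makes the host.minikube.internal entry idempotent: filter out any existing line for the name, append a fresh "IP<tab>name" pair, and copy the result back over /etc/hosts. A Go sketch of the same update, pointed at a scratch file here rather than the real /etc/hosts:

package main

import (
    "fmt"
    "os"
    "strings"
)

// ensureHostsEntry drops any existing line ending in "<tab>host" and
// appends the desired "ip<tab>host" entry, like the grep/echo pipeline
// in the log. Sketch only; empty lines are not preserved.
func ensureHostsEntry(path, ip, host string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(string(data), "\n") {
        if line == "" || strings.HasSuffix(line, "\t"+host) {
            continue
        }
        kept = append(kept, line)
    }
    kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
    if err := ensureHostsEntry("/tmp/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}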
	I1007 12:07:12.062293  401591 kubeadm.go:883] updating cluster {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:07:12.062486  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:07:12.062597  401591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:07:12.097470  401591 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:07:12.097555  401591 ssh_runner.go:195] Run: which lz4
	I1007 12:07:12.101992  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 12:07:12.102107  401591 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:07:12.106769  401591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:07:12.106815  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:07:13.549777  401591 crio.go:462] duration metric: took 1.447693523s to copy over tarball
	I1007 12:07:13.549867  401591 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:07:15.620966  401591 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.071058726s)
	I1007 12:07:15.621003  401591 crio.go:469] duration metric: took 2.071194203s to extract the tarball
	I1007 12:07:15.621015  401591 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 12:07:15.659036  401591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:07:15.704438  401591 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:07:15.704468  401591 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:07:15.704477  401591 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.31.1 crio true true} ...
	I1007 12:07:15.704607  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
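Note: the kubelet unit override above clears the packaged ExecStart and restarts kubelet with this profile's node name, node IP and kubeconfig paths. A sketch of rendering such a drop-in from a few values (the helper is illustrative; the text mirrors the log):

package main

import (
    "fmt"
    "strings"
)

// kubeletDropIn renders a 10-kubeadm.conf-style systemd override like the
// one shown above. Sketch only.
func kubeletDropIn(k8sVersion, nodeName, nodeIP string) string {
    lines := []string{
        "[Unit]",
        "Wants=crio.service",
        "",
        "[Service]",
        "ExecStart=",
        fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s", k8sVersion, nodeName, nodeIP),
        "",
        "[Install]",
    }
    return strings.Join(lines, "\n") + "\n"
}

func main() {
    fmt.Print(kubeletDropIn("v1.31.1", "ha-628553", "192.168.39.110"))
}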
	I1007 12:07:15.704694  401591 ssh_runner.go:195] Run: crio config
	I1007 12:07:15.754734  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:07:15.754757  401591 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:07:15.754770  401591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:07:15.754796  401591 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-628553 NodeName:ha-628553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:07:15.754985  401591 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-628553"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
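Note: the block above is the kubeadm config minikube renders for this node: an InitConfiguration pinned to the node IP and CRI-O socket, a ClusterConfiguration with the HA control-plane endpoint, plus kubelet and kube-proxy settings. A trimmed sketch of rendering the InitConfiguration part with text/template (field names mirror the output above; the template itself is illustrative, not minikube's bundled one):

package main

import (
    "os"
    "text/template"
)

// initTmpl is a cut-down InitConfiguration template; the real config above
// carries many more fields.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
    t := template.Must(template.New("init").Parse(initTmpl))
    _ = t.Execute(os.Stdout, struct {
        NodeIP   string
        NodeName string
        Port     int
    }{NodeIP: "192.168.39.110", NodeName: "ha-628553", Port: 8443})
}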
	
	I1007 12:07:15.755023  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:07:15.755081  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:07:15.772386  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:07:15.772511  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
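Note: the manifest above is the kube-vip static pod that serves the 192.168.39.254 control-plane VIP; kubelet picks it up from /etc/kubernetes/manifests, and on this first boot its kubeconfig hostPath points at super-admin.conf, as generated. A small sketch of dropping such a manifest into the manifests directory (paths and function name are illustrative):

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// writeStaticPodManifest writes a static pod manifest where kubelet's
// staticPodPath expects it. Sketch only; the content would be the generated
// kube-vip YAML shown above.
func writeStaticPodManifest(dir, name, manifest string) error {
    if err := os.MkdirAll(dir, 0o755); err != nil {
        return err
    }
    return os.WriteFile(filepath.Join(dir, name), []byte(manifest), 0o644)
}

func main() {
    err := writeStaticPodManifest("/tmp/manifests", "kube-vip.yaml", "kind: Pod\nmetadata:\n  name: kube-vip\n")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}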
	I1007 12:07:15.772569  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:07:15.783117  401591 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:07:15.783206  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:07:15.793430  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:07:15.811520  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:07:15.829402  401591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:07:15.846802  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 12:07:15.864215  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:07:15.868441  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:07:15.881667  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:16.004989  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:07:16.023767  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.110
	I1007 12:07:16.023798  401591 certs.go:194] generating shared ca certs ...
	I1007 12:07:16.023817  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.023995  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:07:16.024043  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:07:16.024055  401591 certs.go:256] generating profile certs ...
	I1007 12:07:16.024128  401591 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:07:16.024144  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt with IP's: []
	I1007 12:07:16.480073  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt ...
	I1007 12:07:16.480107  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt: {Name:mkfb027cfd899ceeb19712c80d47ef46bbe4c190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.480291  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key ...
	I1007 12:07:16.480303  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key: {Name:mk472c4daf268a3e203f7108e0ee108260fa3747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.480379  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105
	I1007 12:07:16.480394  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I1007 12:07:16.560831  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 ...
	I1007 12:07:16.560865  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105: {Name:mkda56599207690099e4c299c085dc0644ef658a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.561026  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105 ...
	I1007 12:07:16.561038  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105: {Name:mk95b3f2a966eb67f31cfddf5b506b130fe9bd62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.561111  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:07:16.561219  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
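Note: the apiserver certificate generated above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1, the node IP and the HA VIP; 10.96.0.1 is the first host address of the 10.96.0.0/12 service CIDR, which is what the in-cluster kubernetes service resolves to. A sketch of deriving that address (IPv4 only; function name is an assumption):

package main

import (
    "fmt"
    "net"
)

// firstServiceIP returns the first host address of a service CIDR, which is
// why 10.96.0.1 appears among the apiserver certificate SANs above.
func firstServiceIP(cidr string) (net.IP, error) {
    _, ipnet, err := net.ParseCIDR(cidr)
    if err != nil {
        return nil, err
    }
    ip := ipnet.IP.To4()
    if ip == nil {
        return nil, fmt.Errorf("IPv4 CIDR expected, got %s", cidr)
    }
    first := make(net.IP, len(ip))
    copy(first, ip)
    first[3]++ // network address + 1
    return first, nil
}

func main() {
    ip, err := firstServiceIP("10.96.0.0/12")
    fmt.Println(ip, err) // 10.96.0.1 <nil>
}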
	I1007 12:07:16.561278  401591 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:07:16.561293  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt with IP's: []
	I1007 12:07:16.724627  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt ...
	I1007 12:07:16.724663  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt: {Name:mka4b333091a10b550ae6d13ed243d08adf6256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.724831  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key ...
	I1007 12:07:16.724852  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key: {Name:mk6b2bcdf33ba7c4b6b9286fdc19a9d76a966caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.724932  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:07:16.724949  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:07:16.724963  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:07:16.724977  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:07:16.724990  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:07:16.725004  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:07:16.725016  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:07:16.725028  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:07:16.725075  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:07:16.725108  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:07:16.725118  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:07:16.725153  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:07:16.725179  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:07:16.725216  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:07:16.725253  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:16.725329  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:07:16.725350  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:07:16.725362  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:16.726018  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:07:16.753427  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:07:16.781404  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:07:16.817294  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:07:16.847559  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:07:16.873440  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:07:16.900479  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:07:16.927096  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:07:16.955843  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:07:16.983339  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:07:17.013360  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:07:17.041294  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:07:17.061373  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:07:17.067955  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:07:17.081953  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.087146  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.087222  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.094009  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:07:17.108332  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:07:17.122877  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.128622  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.128708  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.136010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:07:17.150544  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:07:17.165028  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.170897  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.170982  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.177949  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
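Note: the test -L / ln -fs commands above create the hashed symlinks (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL-based clients use to look up CA certificates by subject hash. A sketch of producing one such link by shelling out to openssl (paths illustrative; needs openssl in PATH):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// linkCertByHash computes the OpenSSL subject hash of a CA certificate and
// symlinks it as <hash>.0 in certsDir, like the ln -fs commands in the log.
func linkCertByHash(certPath, certsDir string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return fmt.Errorf("openssl x509 -hash: %w", err)
    }
    link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    if _, err := os.Lstat(link); err == nil {
        return nil // already linked
    }
    return os.Symlink(certPath, link)
}

func main() {
    if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}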
	I1007 12:07:17.192554  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:07:17.197582  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:07:17.197639  401591 kubeadm.go:392] StartCluster: {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:07:17.197720  401591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:07:17.197783  401591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:07:17.244966  401591 cri.go:89] found id: ""
	I1007 12:07:17.245041  401591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:07:17.257993  401591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:07:17.270516  401591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:07:17.282873  401591 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:07:17.282897  401591 kubeadm.go:157] found existing configuration files:
	
	I1007 12:07:17.282953  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:07:17.293921  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:07:17.294014  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:07:17.305489  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:07:17.315800  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:07:17.315863  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:07:17.326391  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:07:17.336609  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:07:17.336691  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:07:17.347761  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:07:17.358288  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:07:17.358369  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:07:17.369688  401591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
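Note: the Start line above is the actual bootstrap call: kubeadm init with the rendered config, run with the versioned binaries directory prepended to PATH and a fixed list of preflight checks ignored (existing manifest dirs, port 10250, swap, CPU and memory). A sketch of assembling that invocation (helper name is illustrative; flag values are taken from the log):

package main

import (
    "fmt"
    "strings"
)

// kubeadmInitCmd assembles a kubeadm init command line like the one logged
// above. Sketch only.
func kubeadmInitCmd(version, config string, ignored []string) string {
    return fmt.Sprintf(
        "sudo env PATH=\"/var/lib/minikube/binaries/%s:$PATH\" kubeadm init --config %s --ignore-preflight-errors=%s",
        version, config, strings.Join(ignored, ","))
}

func main() {
    fmt.Println(kubeadmInitCmd("v1.31.1", "/var/tmp/minikube/kubeadm.yaml",
        []string{"DirAvailable--etc-kubernetes-manifests", "Port-10250", "Swap", "NumCPU", "Mem"}))
}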
	I1007 12:07:17.494169  401591 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:07:17.494284  401591 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:07:17.626708  401591 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:07:17.626813  401591 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:07:17.626906  401591 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:07:17.639261  401591 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:07:17.853154  401591 out.go:235]   - Generating certificates and keys ...
	I1007 12:07:17.853313  401591 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:07:17.853396  401591 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:07:17.853510  401591 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:07:17.853594  401591 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:07:18.070639  401591 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:07:18.133955  401591 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:07:18.493727  401591 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:07:18.493854  401591 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-628553 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1007 12:07:18.624521  401591 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:07:18.624725  401591 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-628553 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1007 12:07:18.772457  401591 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:07:19.133450  401591 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:07:19.279063  401591 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:07:19.279188  401591 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:07:19.348410  401591 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:07:19.574804  401591 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:07:19.645430  401591 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:07:19.894630  401591 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:07:20.065666  401591 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:07:20.066298  401591 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:07:20.071555  401591 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:07:20.073562  401591 out.go:235]   - Booting up control plane ...
	I1007 12:07:20.073670  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:07:20.073742  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:07:20.073803  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:07:20.089334  401591 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:07:20.096504  401591 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:07:20.096582  401591 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:07:20.238757  401591 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:07:20.238922  401591 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:07:21.247383  401591 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.007919898s
	I1007 12:07:21.247485  401591 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:07:26.913696  401591 kubeadm.go:310] [api-check] The API server is healthy after 5.671139192s
	I1007 12:07:26.932589  401591 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:07:26.948791  401591 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:07:27.494371  401591 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:07:27.494637  401591 kubeadm.go:310] [mark-control-plane] Marking the node ha-628553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:07:27.512639  401591 kubeadm.go:310] [bootstrap-token] Using token: jd5sg7.ynaw0s6f9h2yr29w
	I1007 12:07:27.514508  401591 out.go:235]   - Configuring RBAC rules ...
	I1007 12:07:27.514678  401591 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:07:27.527273  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:07:27.537651  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:07:27.542026  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:07:27.545879  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:07:27.550174  401591 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:07:27.568355  401591 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:07:27.807712  401591 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:07:28.321610  401591 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:07:28.321657  401591 kubeadm.go:310] 
	I1007 12:07:28.321720  401591 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:07:28.321728  401591 kubeadm.go:310] 
	I1007 12:07:28.321852  401591 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:07:28.321870  401591 kubeadm.go:310] 
	I1007 12:07:28.321904  401591 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:07:28.321987  401591 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:07:28.322064  401591 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:07:28.322074  401591 kubeadm.go:310] 
	I1007 12:07:28.322155  401591 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:07:28.322171  401591 kubeadm.go:310] 
	I1007 12:07:28.322225  401591 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:07:28.322234  401591 kubeadm.go:310] 
	I1007 12:07:28.322293  401591 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:07:28.322386  401591 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:07:28.322471  401591 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:07:28.322481  401591 kubeadm.go:310] 
	I1007 12:07:28.322608  401591 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:07:28.322677  401591 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:07:28.322684  401591 kubeadm.go:310] 
	I1007 12:07:28.322753  401591 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jd5sg7.ynaw0s6f9h2yr29w \
	I1007 12:07:28.322898  401591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 \
	I1007 12:07:28.322931  401591 kubeadm.go:310] 	--control-plane 
	I1007 12:07:28.322941  401591 kubeadm.go:310] 
	I1007 12:07:28.323057  401591 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:07:28.323067  401591 kubeadm.go:310] 
	I1007 12:07:28.323165  401591 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jd5sg7.ynaw0s6f9h2yr29w \
	I1007 12:07:28.323318  401591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 
	I1007 12:07:28.324193  401591 kubeadm.go:310] W1007 12:07:17.473376     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:07:28.324456  401591 kubeadm.go:310] W1007 12:07:17.474417     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:07:28.324568  401591 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
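The join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal Go sketch of how such a hash can be recomputed on the control-plane node, assuming kubeadm's default CA path /etc/kubernetes/pki/ca.crt (illustrative, not taken from this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path is an assumption: kubeadm's default CA location on a control-plane node.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The join hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
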
	I1007 12:07:28.324604  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:07:28.324616  401591 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:07:28.326463  401591 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 12:07:28.327680  401591 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 12:07:28.333563  401591 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 12:07:28.333587  401591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 12:07:28.357058  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 12:07:28.763710  401591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:07:28.763800  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:28.763837  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553 minikube.k8s.io/updated_at=2024_10_07T12_07_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=true
	I1007 12:07:28.789823  401591 ops.go:34] apiserver oom_adj: -16
	I1007 12:07:28.939139  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:29.440288  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:29.939479  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:30.440099  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:30.940243  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:31.439830  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:31.939544  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:32.439274  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:32.691661  401591 kubeadm.go:1113] duration metric: took 3.927936335s to wait for elevateKubeSystemPrivileges
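The repeated `kubectl get sa default` runs above are a wait loop: the command is retried roughly every 500ms until the default ServiceAccount exists before kube-system privileges are elevated. A generic Go sketch of that polling pattern, using the kubectl and kubeconfig paths shown in the log (illustrative only, not minikube's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
// timeout expires, roughly every 500ms as in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", time.Minute)
	fmt.Println(err)
}
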
	I1007 12:07:32.691702  401591 kubeadm.go:394] duration metric: took 15.494065691s to StartCluster
	I1007 12:07:32.691720  401591 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:32.691805  401591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:07:32.694409  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:32.695052  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:07:32.695056  401591 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:07:32.695093  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:07:32.695116  401591 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:07:32.695224  401591 addons.go:69] Setting storage-provisioner=true in profile "ha-628553"
	I1007 12:07:32.695233  401591 addons.go:69] Setting default-storageclass=true in profile "ha-628553"
	I1007 12:07:32.695246  401591 addons.go:234] Setting addon storage-provisioner=true in "ha-628553"
	I1007 12:07:32.695276  401591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-628553"
	I1007 12:07:32.695321  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:32.695278  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:07:32.695828  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.695856  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.695880  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.695904  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.713283  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I1007 12:07:32.713330  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I1007 12:07:32.713795  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.713821  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.714372  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.714404  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.714470  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.714495  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.714860  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.714918  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.715087  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.715613  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.715671  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.717649  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:07:32.717950  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:07:32.718459  401591 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:07:32.718801  401591 addons.go:234] Setting addon default-storageclass=true in "ha-628553"
	I1007 12:07:32.718846  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:07:32.719253  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.719305  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.733464  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45313
	I1007 12:07:32.734011  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.734570  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.734597  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.734946  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.735147  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.736496  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I1007 12:07:32.736815  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:32.737247  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.737699  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.737724  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.738090  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.738558  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.738606  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.739129  401591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:07:32.740633  401591 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:07:32.740659  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:07:32.740683  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:32.744392  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.744885  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:32.744914  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.745085  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:32.745311  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:32.745493  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:32.745635  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
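sshutil.go above builds an SSH client for the node from the machine's id_rsa key. A minimal sketch of the same idea using golang.org/x/crypto/ssh (an assumption for illustration, not minikube's actual helper):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient dials a node with key-based auth, similar in spirit to the
// ssh client created by sshutil.go in the log above.
func newSSHClient(addr, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Throwaway test VMs; real deployments should verify host keys.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	// Address and user come from the log; the key path is a placeholder.
	client, err := newSSHClient("192.168.39.110:22", "docker", "/path/to/machines/ha-628553/id_rsa")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
}
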
	I1007 12:07:32.755450  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I1007 12:07:32.756180  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.756775  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.756839  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.757215  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.757439  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.759112  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:32.759361  401591 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:07:32.759380  401591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:07:32.759399  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:32.761925  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.762241  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:32.762266  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.762381  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:32.762573  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:32.762681  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:32.762803  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:32.893511  401591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:07:32.927665  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 12:07:32.930086  401591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:07:33.749725  401591 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
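The sed pipeline a few lines above rewrites CoreDNS's Corefile so that host.minikube.internal resolves to the host gateway: a hosts{} block with a fallthrough is inserted ahead of the forward plugin. A pure-string Go sketch of that transformation (illustrative, not minikube's code; the sample Corefile is an assumption):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block ahead of the forward plugin,
// mirroring the sed pipeline in the log above.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := []string{
		"        hosts {",
		fmt.Sprintf("           %s host.minikube.internal", hostIP),
		"           fallthrough",
		"        }",
	}
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out = append(out, hostsBlock...)
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	// Sample Corefile fragment (an assumption, not read from the cluster).
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
	fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
}
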
	I1007 12:07:33.749834  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.749857  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750070  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750085  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750150  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.750183  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750217  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750228  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750239  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750364  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750400  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750412  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750420  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750560  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.750625  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750637  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750639  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750662  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750758  401591 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:07:33.750779  401591 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:07:33.750910  401591 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 12:07:33.750920  401591 round_trippers.go:469] Request Headers:
	I1007 12:07:33.750933  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:07:33.750938  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:07:33.762601  401591 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:07:33.763351  401591 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 12:07:33.763370  401591 round_trippers.go:469] Request Headers:
	I1007 12:07:33.763378  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:07:33.763383  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:07:33.763386  401591 round_trippers.go:473]     Content-Type: application/json
	I1007 12:07:33.766118  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:07:33.766300  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.766313  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.766629  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.766646  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.766684  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.768511  401591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 12:07:33.770162  401591 addons.go:510] duration metric: took 1.075047661s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 12:07:33.770212  401591 start.go:246] waiting for cluster config update ...
	I1007 12:07:33.770227  401591 start.go:255] writing updated cluster config ...
	I1007 12:07:33.772026  401591 out.go:201] 
	I1007 12:07:33.773570  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:33.773647  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:33.775167  401591 out.go:177] * Starting "ha-628553-m02" control-plane node in "ha-628553" cluster
	I1007 12:07:33.776386  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:07:33.776419  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:07:33.776564  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:07:33.776577  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:07:33.776670  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:33.776889  401591 start.go:360] acquireMachinesLock for ha-628553-m02: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:07:33.776949  401591 start.go:364] duration metric: took 33.552µs to acquireMachinesLock for "ha-628553-m02"
	I1007 12:07:33.776978  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:07:33.777088  401591 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 12:07:33.779624  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:07:33.779742  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:33.779791  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:33.795004  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I1007 12:07:33.795415  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:33.795909  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:33.795931  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:33.796264  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:33.796498  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:33.796628  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:33.796770  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:07:33.796805  401591 client.go:168] LocalClient.Create starting
	I1007 12:07:33.796847  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:07:33.796894  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:07:33.796911  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:07:33.796968  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:07:33.796987  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:07:33.796997  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:07:33.797015  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:07:33.797023  401591 main.go:141] libmachine: (ha-628553-m02) Calling .PreCreateCheck
	I1007 12:07:33.797222  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:33.797700  401591 main.go:141] libmachine: Creating machine...
	I1007 12:07:33.797714  401591 main.go:141] libmachine: (ha-628553-m02) Calling .Create
	I1007 12:07:33.797891  401591 main.go:141] libmachine: (ha-628553-m02) Creating KVM machine...
	I1007 12:07:33.799094  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found existing default KVM network
	I1007 12:07:33.799243  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found existing private KVM network mk-ha-628553
	I1007 12:07:33.799364  401591 main.go:141] libmachine: (ha-628553-m02) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 ...
	I1007 12:07:33.799377  401591 main.go:141] libmachine: (ha-628553-m02) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:07:33.799477  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:33.799367  401944 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:07:33.799603  401591 main.go:141] libmachine: (ha-628553-m02) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:07:34.069404  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.069235  401944 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa...
	I1007 12:07:34.176325  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.176157  401944 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/ha-628553-m02.rawdisk...
	I1007 12:07:34.176359  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Writing magic tar header
	I1007 12:07:34.176372  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Writing SSH key tar header
	I1007 12:07:34.176384  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.176303  401944 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 ...
	I1007 12:07:34.176398  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02
	I1007 12:07:34.176501  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:07:34.176544  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:07:34.176555  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 (perms=drwx------)
	I1007 12:07:34.176567  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:07:34.176576  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:07:34.176583  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:07:34.176594  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:07:34.176609  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:07:34.176622  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:07:34.176635  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:07:34.176651  401591 main.go:141] libmachine: (ha-628553-m02) Creating domain...
	I1007 12:07:34.176660  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:07:34.176668  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home
	I1007 12:07:34.176675  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Skipping /home - not owner
	I1007 12:07:34.177701  401591 main.go:141] libmachine: (ha-628553-m02) define libvirt domain using xml: 
	I1007 12:07:34.177730  401591 main.go:141] libmachine: (ha-628553-m02) <domain type='kvm'>
	I1007 12:07:34.177740  401591 main.go:141] libmachine: (ha-628553-m02)   <name>ha-628553-m02</name>
	I1007 12:07:34.177751  401591 main.go:141] libmachine: (ha-628553-m02)   <memory unit='MiB'>2200</memory>
	I1007 12:07:34.177759  401591 main.go:141] libmachine: (ha-628553-m02)   <vcpu>2</vcpu>
	I1007 12:07:34.177766  401591 main.go:141] libmachine: (ha-628553-m02)   <features>
	I1007 12:07:34.177777  401591 main.go:141] libmachine: (ha-628553-m02)     <acpi/>
	I1007 12:07:34.177786  401591 main.go:141] libmachine: (ha-628553-m02)     <apic/>
	I1007 12:07:34.177796  401591 main.go:141] libmachine: (ha-628553-m02)     <pae/>
	I1007 12:07:34.177809  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.177820  401591 main.go:141] libmachine: (ha-628553-m02)   </features>
	I1007 12:07:34.177834  401591 main.go:141] libmachine: (ha-628553-m02)   <cpu mode='host-passthrough'>
	I1007 12:07:34.177844  401591 main.go:141] libmachine: (ha-628553-m02)   
	I1007 12:07:34.177853  401591 main.go:141] libmachine: (ha-628553-m02)   </cpu>
	I1007 12:07:34.177864  401591 main.go:141] libmachine: (ha-628553-m02)   <os>
	I1007 12:07:34.177870  401591 main.go:141] libmachine: (ha-628553-m02)     <type>hvm</type>
	I1007 12:07:34.177876  401591 main.go:141] libmachine: (ha-628553-m02)     <boot dev='cdrom'/>
	I1007 12:07:34.177883  401591 main.go:141] libmachine: (ha-628553-m02)     <boot dev='hd'/>
	I1007 12:07:34.177888  401591 main.go:141] libmachine: (ha-628553-m02)     <bootmenu enable='no'/>
	I1007 12:07:34.177895  401591 main.go:141] libmachine: (ha-628553-m02)   </os>
	I1007 12:07:34.177900  401591 main.go:141] libmachine: (ha-628553-m02)   <devices>
	I1007 12:07:34.177910  401591 main.go:141] libmachine: (ha-628553-m02)     <disk type='file' device='cdrom'>
	I1007 12:07:34.177952  401591 main.go:141] libmachine: (ha-628553-m02)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/boot2docker.iso'/>
	I1007 12:07:34.177981  401591 main.go:141] libmachine: (ha-628553-m02)       <target dev='hdc' bus='scsi'/>
	I1007 12:07:34.177992  401591 main.go:141] libmachine: (ha-628553-m02)       <readonly/>
	I1007 12:07:34.178002  401591 main.go:141] libmachine: (ha-628553-m02)     </disk>
	I1007 12:07:34.178015  401591 main.go:141] libmachine: (ha-628553-m02)     <disk type='file' device='disk'>
	I1007 12:07:34.178028  401591 main.go:141] libmachine: (ha-628553-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:07:34.178044  401591 main.go:141] libmachine: (ha-628553-m02)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/ha-628553-m02.rawdisk'/>
	I1007 12:07:34.178055  401591 main.go:141] libmachine: (ha-628553-m02)       <target dev='hda' bus='virtio'/>
	I1007 12:07:34.178066  401591 main.go:141] libmachine: (ha-628553-m02)     </disk>
	I1007 12:07:34.178073  401591 main.go:141] libmachine: (ha-628553-m02)     <interface type='network'>
	I1007 12:07:34.178085  401591 main.go:141] libmachine: (ha-628553-m02)       <source network='mk-ha-628553'/>
	I1007 12:07:34.178102  401591 main.go:141] libmachine: (ha-628553-m02)       <model type='virtio'/>
	I1007 12:07:34.178114  401591 main.go:141] libmachine: (ha-628553-m02)     </interface>
	I1007 12:07:34.178125  401591 main.go:141] libmachine: (ha-628553-m02)     <interface type='network'>
	I1007 12:07:34.178138  401591 main.go:141] libmachine: (ha-628553-m02)       <source network='default'/>
	I1007 12:07:34.178148  401591 main.go:141] libmachine: (ha-628553-m02)       <model type='virtio'/>
	I1007 12:07:34.178157  401591 main.go:141] libmachine: (ha-628553-m02)     </interface>
	I1007 12:07:34.178172  401591 main.go:141] libmachine: (ha-628553-m02)     <serial type='pty'>
	I1007 12:07:34.178184  401591 main.go:141] libmachine: (ha-628553-m02)       <target port='0'/>
	I1007 12:07:34.178191  401591 main.go:141] libmachine: (ha-628553-m02)     </serial>
	I1007 12:07:34.178201  401591 main.go:141] libmachine: (ha-628553-m02)     <console type='pty'>
	I1007 12:07:34.178212  401591 main.go:141] libmachine: (ha-628553-m02)       <target type='serial' port='0'/>
	I1007 12:07:34.178223  401591 main.go:141] libmachine: (ha-628553-m02)     </console>
	I1007 12:07:34.178233  401591 main.go:141] libmachine: (ha-628553-m02)     <rng model='virtio'>
	I1007 12:07:34.178266  401591 main.go:141] libmachine: (ha-628553-m02)       <backend model='random'>/dev/random</backend>
	I1007 12:07:34.178292  401591 main.go:141] libmachine: (ha-628553-m02)     </rng>
	I1007 12:07:34.178303  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.178316  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.178324  401591 main.go:141] libmachine: (ha-628553-m02)   </devices>
	I1007 12:07:34.178331  401591 main.go:141] libmachine: (ha-628553-m02) </domain>
	I1007 12:07:34.178342  401591 main.go:141] libmachine: (ha-628553-m02) 
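The <domain> XML echoed above is handed to libvirt, which defines a persistent domain and boots it. A minimal Go sketch of that step, assuming the github.com/libvirt/libvirt-go bindings (an assumption; the kvm2 driver's real code path differs in detail, and the XML here is a placeholder rather than the full definition above):

package main

import (
	"fmt"

	libvirt "github.com/libvirt/libvirt-go"
)

// defineAndStart registers a persistent libvirt domain from XML and boots it.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Create()
}

func main() {
	// Placeholder XML; in practice this would be the full <domain> definition above.
	xml := "<domain type='kvm'><name>example</name></domain>"
	if err := defineAndStart(xml); err != nil {
		fmt.Println(err)
	}
}
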
	I1007 12:07:34.185967  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:33:2a:81 in network default
	I1007 12:07:34.186520  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring networks are active...
	I1007 12:07:34.186550  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:34.187255  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring network default is active
	I1007 12:07:34.187562  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring network mk-ha-628553 is active
	I1007 12:07:34.187923  401591 main.go:141] libmachine: (ha-628553-m02) Getting domain xml...
	I1007 12:07:34.188741  401591 main.go:141] libmachine: (ha-628553-m02) Creating domain...
	I1007 12:07:35.460306  401591 main.go:141] libmachine: (ha-628553-m02) Waiting to get IP...
	I1007 12:07:35.461270  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.461715  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.461750  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.461693  401944 retry.go:31] will retry after 211.598538ms: waiting for machine to come up
	I1007 12:07:35.675347  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.675895  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.675927  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.675805  401944 retry.go:31] will retry after 296.849ms: waiting for machine to come up
	I1007 12:07:35.974395  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.974893  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.974954  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.974854  401944 retry.go:31] will retry after 388.404149ms: waiting for machine to come up
	I1007 12:07:36.365448  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:36.366155  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:36.366184  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:36.366075  401944 retry.go:31] will retry after 534.318698ms: waiting for machine to come up
	I1007 12:07:36.901907  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:36.902475  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:36.902512  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:36.902413  401944 retry.go:31] will retry after 649.263788ms: waiting for machine to come up
	I1007 12:07:37.553345  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:37.553872  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:37.553898  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:37.553792  401944 retry.go:31] will retry after 939.159086ms: waiting for machine to come up
	I1007 12:07:38.495133  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:38.495757  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:38.495785  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:38.495703  401944 retry.go:31] will retry after 913.128072ms: waiting for machine to come up
	I1007 12:07:39.410208  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:39.410778  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:39.410847  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:39.410734  401944 retry.go:31] will retry after 1.275296837s: waiting for machine to come up
	I1007 12:07:40.688215  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:40.688737  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:40.688763  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:40.688692  401944 retry.go:31] will retry after 1.706568868s: waiting for machine to come up
	I1007 12:07:42.397331  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:42.398210  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:42.398242  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:42.398140  401944 retry.go:31] will retry after 2.035219193s: waiting for machine to come up
	I1007 12:07:44.435063  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:44.435558  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:44.435604  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:44.435541  401944 retry.go:31] will retry after 2.129313504s: waiting for machine to come up
	I1007 12:07:46.567866  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:46.568337  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:46.568363  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:46.568294  401944 retry.go:31] will retry after 2.900138556s: waiting for machine to come up
	I1007 12:07:49.470446  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:49.470835  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:49.470861  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:49.470787  401944 retry.go:31] will retry after 2.802723119s: waiting for machine to come up
	I1007 12:07:52.276755  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:52.277120  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:52.277151  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:52.277100  401944 retry.go:31] will retry after 4.815030442s: waiting for machine to come up
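Each "will retry after ..." line above comes from a retry helper that waits with a growing, jittered delay while the new VM acquires a DHCP lease. A generic Go sketch of that retry-with-backoff pattern (the initial delay, growth factor, and cap are assumptions, not the values retry.go actually uses):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or maxWait elapses, sleeping a
// growing, jittered delay between attempts.
func retryWithBackoff(fn func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
}

func main() {
	var tries int
	err := retryWithBackoff(func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("err:", err, "tries:", tries)
}
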
	I1007 12:07:57.095944  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.096384  401591 main.go:141] libmachine: (ha-628553-m02) Found IP for machine: 192.168.39.169
	I1007 12:07:57.096411  401591 main.go:141] libmachine: (ha-628553-m02) Reserving static IP address...
	I1007 12:07:57.096424  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has current primary IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.096805  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find host DHCP lease matching {name: "ha-628553-m02", mac: "52:54:00:59:4a:2e", ip: "192.168.39.169"} in network mk-ha-628553
	I1007 12:07:57.173671  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Getting to WaitForSSH function...
	I1007 12:07:57.173707  401591 main.go:141] libmachine: (ha-628553-m02) Reserved static IP address: 192.168.39.169
	I1007 12:07:57.173721  401591 main.go:141] libmachine: (ha-628553-m02) Waiting for SSH to be available...
	I1007 12:07:57.176077  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.176414  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.176448  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.176591  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH client type: external
	I1007 12:07:57.176618  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa (-rw-------)
	I1007 12:07:57.176654  401591 main.go:141] libmachine: (ha-628553-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:07:57.176671  401591 main.go:141] libmachine: (ha-628553-m02) DBG | About to run SSH command:
	I1007 12:07:57.176683  401591 main.go:141] libmachine: (ha-628553-m02) DBG | exit 0
	I1007 12:07:57.299343  401591 main.go:141] libmachine: (ha-628553-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 12:07:57.299606  401591 main.go:141] libmachine: (ha-628553-m02) KVM machine creation complete!
	I1007 12:07:57.299951  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:57.300520  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:57.300733  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:57.300899  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:07:57.300909  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetState
	I1007 12:07:57.302247  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:07:57.302263  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:07:57.302270  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:07:57.302277  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.304689  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.305046  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.305083  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.305220  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.305416  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.305566  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.305687  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.305859  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.306075  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.306087  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:07:57.402628  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:57.402652  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:07:57.402660  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.405841  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.406213  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.406245  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.406443  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.406658  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.406871  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.407020  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.407143  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.407310  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.407320  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:07:57.503882  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:07:57.503964  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:07:57.503972  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:07:57.503980  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.504231  401591 buildroot.go:166] provisioning hostname "ha-628553-m02"
	I1007 12:07:57.504259  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.504487  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.507249  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.507577  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.507606  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.507742  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.507923  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.508054  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.508176  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.508480  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.508681  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.508694  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m02 && echo "ha-628553-m02" | sudo tee /etc/hostname
	I1007 12:07:57.622198  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m02
	
	I1007 12:07:57.622239  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.625084  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.625439  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.625478  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.625644  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.625837  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.626007  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.626130  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.626308  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.626503  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.626525  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:07:57.732566  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
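The two SSH commands above pin the node's identity: the first sets the live hostname and writes /etc/hostname, the second patches the 127.0.1.1 entry in /etc/hosts. As a minimal sketch of rebuilding that first one-liner in Go (setHostnameCmd is a hypothetical helper for illustration, not minikube's provision code):

package main

import "fmt"

// setHostnameCmd rebuilds the shell one-liner shown in the log above for a
// given node name. Hypothetical helper, illustration only.
func setHostnameCmd(name string) string {
	return fmt.Sprintf("sudo hostname %[1]s && echo \"%[1]s\" | sudo tee /etc/hostname", name)
}

func main() {
	fmt.Println(setHostnameCmd("ha-628553-m02"))
}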
	I1007 12:07:57.732598  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:07:57.732622  401591 buildroot.go:174] setting up certificates
	I1007 12:07:57.732636  401591 provision.go:84] configureAuth start
	I1007 12:07:57.732649  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.732948  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:57.735493  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.735786  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.735817  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.735963  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.737975  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.738293  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.738318  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.738455  401591 provision.go:143] copyHostCerts
	I1007 12:07:57.738486  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:57.738525  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:07:57.738541  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:57.738610  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:07:57.738684  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:57.738703  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:07:57.738710  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:57.738733  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:07:57.738777  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:57.738793  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:07:57.738800  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:57.738820  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:07:57.738866  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m02 san=[127.0.0.1 192.168.39.169 ha-628553-m02 localhost minikube]
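The provisioning step above issues a server certificate whose SANs cover 127.0.0.1, the node IP, the node name, localhost and minikube. Below is a rough sketch of producing such a certificate with Go's crypto/x509; it self-signs for brevity, whereas the log shows signing against the shared ca.pem/ca-key.pem, so treat it as an assumption-laden illustration rather than minikube's cert path.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Errors are ignored to keep the sketch short.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-628553-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		DNSNames:    []string{"ha-628553-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.169")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}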
	I1007 12:07:58.143814  401591 provision.go:177] copyRemoteCerts
	I1007 12:07:58.143882  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:07:58.143910  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.147250  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.147700  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.147742  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.147869  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.148081  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.148224  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.148327  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.230179  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:07:58.230271  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:07:58.258288  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:07:58.258382  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:07:58.285135  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:07:58.285208  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:07:58.312621  401591 provision.go:87] duration metric: took 579.970325ms to configureAuth
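copyRemoteCerts then pushes ca.pem, server.pem and server-key.pem to /etc/docker on the new node over SSH. A hedged sketch of that idea with golang.org/x/crypto/ssh follows; copyFile is a made-up helper and only the key path is taken from the sshutil line above, so this shows the general pattern of piping a local file into "sudo tee" on the remote host, not minikube's actual ssh_runner.

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyFile streams a local file into "sudo tee <remote>" over an SSH session.
func copyFile(client *ssh.Client, local, remote string) error {
	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remote))
}

func main() {
	// Key path copied from the sshutil.go line in the log; errors elided for brevity.
	key, _ := os.ReadFile("/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa")
	signer, _ := ssh.ParsePrivateKey(key)
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.169:22", cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer client.Close()
	fmt.Println(copyFile(client, "ca.pem", "/etc/docker/ca.pem"))
}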
	I1007 12:07:58.312652  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:07:58.312828  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:58.312907  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.315586  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.315959  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.315990  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.316222  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.316422  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.316601  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.316743  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.316927  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:58.317142  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:58.317161  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:07:58.545249  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:07:58.545278  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:07:58.545290  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetURL
	I1007 12:07:58.546702  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using libvirt version 6000000
	I1007 12:07:58.548842  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.549284  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.549317  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.549407  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:07:58.549418  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:07:58.549424  401591 client.go:171] duration metric: took 24.752608877s to LocalClient.Create
	I1007 12:07:58.549459  401591 start.go:167] duration metric: took 24.752691243s to libmachine.API.Create "ha-628553"
	I1007 12:07:58.549474  401591 start.go:293] postStartSetup for "ha-628553-m02" (driver="kvm2")
	I1007 12:07:58.549489  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:07:58.549507  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.549760  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:07:58.549786  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.551787  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.552071  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.552105  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.552239  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.552437  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.552667  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.552832  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.629949  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:07:58.634600  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:07:58.634633  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:07:58.634716  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:07:58.634820  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:07:58.634833  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:07:58.634948  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:07:58.644927  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:58.670613  401591 start.go:296] duration metric: took 121.120015ms for postStartSetup
	I1007 12:07:58.670687  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:58.671316  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:58.673738  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.674117  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.674143  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.674429  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:58.674687  401591 start.go:128] duration metric: took 24.897586771s to createHost
	I1007 12:07:58.674717  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.676881  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.677232  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.677261  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.677369  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.677545  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.677717  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.677844  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.677997  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:58.678177  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:58.678188  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:07:58.776120  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302878.748851389
	
	I1007 12:07:58.776147  401591 fix.go:216] guest clock: 1728302878.748851389
	I1007 12:07:58.776158  401591 fix.go:229] Guest: 2024-10-07 12:07:58.748851389 +0000 UTC Remote: 2024-10-07 12:07:58.674704612 +0000 UTC m=+72.466738357 (delta=74.146777ms)
	I1007 12:07:58.776181  401591 fix.go:200] guest clock delta is within tolerance: 74.146777ms
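The fix.go lines above read the guest's clock with `date +%s.%N` and accept the node because the drift from the host is only ~74ms. A tiny, hypothetical sketch of that arithmetic (the 1-second tolerance here is an arbitrary stand-in, not minikube's threshold):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	guestOut := "1728302878.748851389" // guest's `date +%s.%N` output from the log
	secs, _ := strconv.ParseFloat(guestOut, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest) // host clock minus guest clock
	fmt.Printf("guest clock delta: %v (within 1s tolerance: %v)\n",
		delta, math.Abs(delta.Seconds()) < 1)
}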
	I1007 12:07:58.776187  401591 start.go:83] releasing machines lock for "ha-628553-m02", held for 24.999226116s
	I1007 12:07:58.776211  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.776496  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:58.779145  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.779528  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.779560  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.782069  401591 out.go:177] * Found network options:
	I1007 12:07:58.783459  401591 out.go:177]   - NO_PROXY=192.168.39.110
	W1007 12:07:58.784861  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:07:58.784899  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785569  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785759  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785866  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:07:58.785905  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	W1007 12:07:58.785978  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:07:58.786070  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:07:58.786094  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.788699  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.788936  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789075  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.789100  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789286  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.789381  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.789402  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789444  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.789536  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.789631  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.789706  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.789783  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.789824  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.789925  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:59.016879  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:07:59.023633  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:07:59.023710  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:07:59.041152  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:07:59.041183  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:07:59.041268  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:07:59.058168  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:07:59.074089  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:07:59.074153  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:07:59.089704  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:07:59.104808  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:07:59.234539  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:07:59.391501  401591 docker.go:233] disabling docker service ...
	I1007 12:07:59.391564  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:07:59.406313  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:07:59.420588  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:07:59.553910  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:07:59.664194  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:07:59.679241  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:07:59.699517  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:07:59.699594  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.710670  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:07:59.710739  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.721864  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.733897  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.746035  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:07:59.757811  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.769881  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.789700  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
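The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, force conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. As a sketch of the first two rewrites done in Go with regexp (not minikube's actual code path; the sample input string is made up):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}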
	I1007 12:07:59.800942  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:07:59.811016  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:07:59.811084  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:07:59.827337  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:07:59.838316  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:59.964123  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:08:00.067227  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:08:00.067310  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:08:00.073044  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:08:00.073120  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:08:00.077800  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:08:00.127300  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
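After restarting CRI-O, the start logic above waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer with a version. A minimal, hypothetical polling helper that captures the same idea (not minikube's start.go):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the timeout expires.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}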
	I1007 12:08:00.127397  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:08:00.156941  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:08:00.190072  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:08:00.191853  401591 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:08:00.193177  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:08:00.196263  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:08:00.196746  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:08:00.196779  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:08:00.196928  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:08:00.201903  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:00.215603  401591 mustload.go:65] Loading cluster: ha-628553
	I1007 12:08:00.215803  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:00.216063  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:00.216108  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:00.231500  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I1007 12:08:00.231984  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:00.232515  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:00.232538  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:00.232906  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:00.233117  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:08:00.234754  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:08:00.235153  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:00.235205  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:00.251119  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I1007 12:08:00.251713  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:00.252244  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:00.252269  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:00.252599  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:00.252779  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:08:00.252870  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.169
	I1007 12:08:00.252879  401591 certs.go:194] generating shared ca certs ...
	I1007 12:08:00.252902  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.253042  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:08:00.253085  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:08:00.253095  401591 certs.go:256] generating profile certs ...
	I1007 12:08:00.253179  401591 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:08:00.253210  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7
	I1007 12:08:00.253235  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.254]
	I1007 12:08:00.386276  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 ...
	I1007 12:08:00.386312  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7: {Name:mk3203e0eda21b3db6f2dd0a690d84683948f867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.386525  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7 ...
	I1007 12:08:00.386553  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7: {Name:mkfc3d62b17b51155465b7666879f42f7347e54c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.386666  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:08:00.386851  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:08:00.387056  401591 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:08:00.387074  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:08:00.387092  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:08:00.387112  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:08:00.387134  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:08:00.387151  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:08:00.387168  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:08:00.387184  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:08:00.387203  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:08:00.387277  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:08:00.387324  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:08:00.387338  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:08:00.387372  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:08:00.387402  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:08:00.387436  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:08:00.387492  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:08:00.387532  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:08:00.387560  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:08:00.387578  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:00.387630  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:08:00.391299  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:00.391779  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:08:00.391810  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:00.392002  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:08:00.392226  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:08:00.392412  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:08:00.392620  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:08:00.467476  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:08:00.476301  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:08:00.489016  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:08:00.494136  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:08:00.509194  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:08:00.513966  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:08:00.525972  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:08:00.530730  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:08:00.543099  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:08:00.548533  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:08:00.560887  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:08:00.565537  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:08:00.578649  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:08:00.607063  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:08:00.634228  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:08:00.660702  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:08:00.687010  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 12:08:00.713721  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:08:00.740934  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:08:00.768133  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:08:00.794572  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:08:00.820864  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:08:00.847539  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:08:00.876441  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:08:00.895435  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:08:00.913785  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:08:00.932908  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:08:00.951947  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:08:00.969974  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:08:00.988515  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:08:01.007600  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:08:01.014010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:08:01.025708  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.030507  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.030585  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.037094  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:08:01.049368  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:08:01.062454  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.067451  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.067538  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.073743  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:08:01.085386  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:08:01.096871  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.102352  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.102441  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.108559  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
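The openssl/ln pairs above install each CA under /usr/share/ca-certificates and symlink it into /etc/ssl/certs under its subject hash (e.g. b5213941.0 for minikubeCA.pem). A hedged Go sketch of that step, shelling out to openssl for the hash; linkCert is a made-up helper, not minikube's code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the certificate's subject hash and creates the
// "<hash>.0" symlink that OpenSSL-based clients look up.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}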
	I1007 12:08:01.120791  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:08:01.125796  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:08:01.125854  401591 kubeadm.go:934] updating node {m02 192.168.39.169 8443 v1.31.1 crio true true} ...
	I1007 12:08:01.125945  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:08:01.125972  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:08:01.126011  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:08:01.142927  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:08:01.143035  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:08:01.143100  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:08:01.154825  401591 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:08:01.154901  401591 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:08:01.166246  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:08:01.166280  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:08:01.166330  401591 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 12:08:01.166350  401591 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 12:08:01.166352  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:08:01.171889  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:08:01.171923  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:08:01.865609  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:08:01.865701  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:08:01.871954  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:08:01.872006  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:08:01.960218  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:08:02.002318  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:08:02.002440  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:08:02.020653  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:08:02.020697  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
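Since /var/lib/minikube/binaries/v1.31.1 is empty on the fresh node, kubectl, kubeadm and kubelet are fetched from dl.k8s.io using the checksum= URLs above and then scp'd over. A rough, assumption-laden sketch of a checksum-verified download (a plain HTTP GET plus the published .sha256 file, rather than the download library minikube actually uses):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchChecked downloads url and verifies the body against url+".sha256".
func fetchChecked(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return nil, err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return nil, err
	}
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != strings.Fields(string(want))[0] {
		return nil, fmt.Errorf("checksum mismatch for %s", url)
	}
	return body, nil
}

func main() {
	bin, err := fetchChecked("https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl")
	fmt.Println(len(bin), err)
}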
	I1007 12:08:02.500270  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:08:02.510702  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:08:02.529075  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:08:02.546750  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:08:02.565165  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:08:02.569362  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:02.582612  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:02.707124  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:08:02.725325  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:08:02.725700  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:02.725750  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:02.741913  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I1007 12:08:02.742441  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:02.742930  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:02.742953  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:02.743338  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:02.743547  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:08:02.743717  401591 start.go:317] joinCluster: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:08:02.743844  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:08:02.743869  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:08:02.747217  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:02.747665  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:08:02.747694  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:02.747872  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:08:02.748048  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:08:02.748193  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:08:02.748311  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:08:02.893504  401591 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:02.893569  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xsg4ou.msqa1mnarg4j4fst --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m02 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443"
	I1007 12:08:24.411215  401591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xsg4ou.msqa1mnarg4j4fst --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m02 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443": (21.517602331s)
	I1007 12:08:24.411250  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:08:24.991460  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553-m02 minikube.k8s.io/updated_at=2024_10_07T12_08_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=false
	I1007 12:08:25.149659  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-628553-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:08:25.289097  401591 start.go:319] duration metric: took 22.545377397s to joinCluster
	I1007 12:08:25.289200  401591 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:25.289529  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:25.291070  401591 out.go:177] * Verifying Kubernetes components...
	I1007 12:08:25.292571  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:25.564988  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:08:25.614504  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:08:25.614869  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:08:25.614979  401591 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:08:25.615327  401591 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m02" to be "Ready" ...
	I1007 12:08:25.615461  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:25.615476  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:25.615490  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:25.615502  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:25.627711  401591 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 12:08:26.115662  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:26.115688  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:26.115696  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:26.115700  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:26.119790  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:26.615649  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:26.615673  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:26.615681  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:26.615685  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:26.619911  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.115994  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:27.116020  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:27.116029  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:27.116032  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:27.120154  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.616200  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:27.616222  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:27.616230  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:27.616234  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:27.620627  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.621267  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:28.116293  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:28.116321  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:28.116331  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:28.116337  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:28.121199  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:28.616216  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:28.616252  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:28.616260  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:28.616275  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:28.624618  401591 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:08:29.116125  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:29.116148  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:29.116156  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:29.116161  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:29.143192  401591 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1007 12:08:29.616218  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:29.616252  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:29.616260  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:29.616263  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:29.621645  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:29.622758  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:30.116377  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:30.116414  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:30.116434  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:30.116442  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:30.120276  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:30.616264  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:30.616289  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:30.616298  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:30.616302  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:30.619656  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:31.115662  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:31.115686  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:31.115695  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:31.115698  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:31.120037  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:31.616077  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:31.616103  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:31.616112  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:31.616119  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.027207  401591 round_trippers.go:574] Response Status: 200 OK in 411 milliseconds
	I1007 12:08:32.028035  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:32.116023  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:32.116049  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:32.116061  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:32.116066  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.123800  401591 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:08:32.615910  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:32.615936  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:32.615945  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:32.615949  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.619848  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:33.115622  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:33.115645  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:33.115652  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:33.115657  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:33.119744  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:33.616336  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:33.616363  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:33.616372  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:33.616378  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:33.620139  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:34.116322  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:34.116357  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:34.116368  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:34.116374  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:34.119958  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:34.120614  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:34.615645  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:34.615672  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:34.615682  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:34.615687  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:34.619017  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:35.115922  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:35.115951  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:35.115965  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:35.115969  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:35.119735  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:35.615551  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:35.615578  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:35.615589  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:35.615595  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:35.619854  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:36.115806  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:36.115830  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:36.115839  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:36.115842  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:36.119509  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:36.616590  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:36.616626  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:36.616638  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:36.616646  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:36.620711  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:36.621977  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:37.116201  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:37.116229  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:37.116237  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:37.116241  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:37.119861  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:37.615763  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:37.615789  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:37.615798  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:37.615801  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:37.619542  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:38.116230  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:38.116254  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:38.116262  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:38.116266  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:38.119599  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:38.616300  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:38.616327  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:38.616336  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:38.616340  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:38.622637  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:08:38.623148  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:39.116056  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:39.116089  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:39.116102  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:39.116108  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:39.119313  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:39.615634  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:39.615660  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:39.615668  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:39.615672  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:39.619449  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:40.116288  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:40.116318  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:40.116330  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:40.116337  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:40.120596  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:40.615608  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:40.615636  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:40.615645  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:40.615650  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:40.619654  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:41.115684  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:41.115712  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:41.115723  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:41.115729  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:41.119362  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:41.119941  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:41.616052  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:41.616080  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:41.616092  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:41.616099  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:41.621355  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:42.116153  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:42.116179  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:42.116190  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:42.116195  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:42.119158  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:42.615813  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:42.615838  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:42.615849  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:42.615856  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:42.619479  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.116150  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.116183  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.116193  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.116197  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.119726  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.120412  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:43.615803  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.615825  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.615833  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.615837  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.619282  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.619820  401591 node_ready.go:49] node "ha-628553-m02" has status "Ready":"True"
	I1007 12:08:43.619840  401591 node_ready.go:38] duration metric: took 18.00448517s for node "ha-628553-m02" to be "Ready" ...
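
Editor's note: the node_ready wait logged above is a polling loop of GET /api/v1/nodes/ha-628553-m02 that finishes once the node's Ready condition reports True. For readers who want to reproduce that check outside the test harness, the following is a minimal client-go sketch; the kubeconfig path, the plain poll loop, and the 500ms interval are illustrative assumptions inferred from the log timestamps, not the harness's actual code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True,
// which is what the `"Ready":"False"` / `"Ready":"True"` log lines above reflect.
func nodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path for illustration; the harness uses its own profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll roughly every 500ms, matching the spacing of the GETs in the log.
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-628553-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node ha-628553-m02 is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```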
	I1007 12:08:43.619850  401591 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:08:43.619942  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:43.619953  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.619962  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.619968  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.625430  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:43.631358  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.631464  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:08:43.631473  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.631481  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.631485  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.634796  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.635822  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.635842  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.635852  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.635858  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.638589  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.639211  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.639241  401591 pod_ready.go:82] duration metric: took 7.850216ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.639256  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.639336  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:08:43.639349  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.639360  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.639367  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.642168  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.642861  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.642879  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.642885  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.642891  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.645645  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.646131  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.646152  401591 pod_ready.go:82] duration metric: took 6.888201ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.646164  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.646225  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:08:43.646233  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.646240  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.646244  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.649034  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.649700  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.649718  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.649726  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.649731  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.652932  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.653474  401591 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.653494  401591 pod_ready.go:82] duration metric: took 7.324392ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.653506  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.653570  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:08:43.653578  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.653585  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.653589  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.656625  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.657314  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.657332  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.657340  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.657344  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.659929  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.660411  401591 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.660431  401591 pod_ready.go:82] duration metric: took 6.918652ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.660446  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.816876  401591 request.go:632] Waited for 156.326759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:08:43.816939  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:08:43.816943  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.816951  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.816956  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.820806  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.015988  401591 request.go:632] Waited for 194.312012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.016073  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.016081  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.016091  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.016121  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.019609  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.020136  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.020158  401591 pod_ready.go:82] duration metric: took 359.705878ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.020169  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.216359  401591 request.go:632] Waited for 196.109348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:08:44.216441  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:08:44.216449  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.216460  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.216468  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.222633  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:08:44.416891  401591 request.go:632] Waited for 193.411987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:44.416975  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:44.416983  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.416993  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.416999  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.420954  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.421562  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.421582  401591 pod_ready.go:82] duration metric: took 401.406583ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.421592  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.616625  401591 request.go:632] Waited for 194.940502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:08:44.616688  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:08:44.616693  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.616701  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.616707  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.620706  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.815865  401591 request.go:632] Waited for 194.348456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.815947  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.815954  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.815966  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.815972  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.819923  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.820749  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.820767  401591 pod_ready.go:82] duration metric: took 399.169132ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.820778  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.015880  401591 request.go:632] Waited for 195.028084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:08:45.015978  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:08:45.015983  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.015991  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.015997  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.020421  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.216616  401591 request.go:632] Waited for 195.391964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:45.216689  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:45.216696  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.216707  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.216712  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.221024  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.221697  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:45.221728  401591 pod_ready.go:82] duration metric: took 400.942386ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.221743  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.416754  401591 request.go:632] Waited for 194.909444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:08:45.416821  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:08:45.416834  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.416842  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.416848  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.421020  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.616294  401591 request.go:632] Waited for 194.468244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:45.616378  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:45.616387  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.616399  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.616406  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.620542  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.621474  401591 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:45.621500  401591 pod_ready.go:82] duration metric: took 399.748616ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.621515  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.816631  401591 request.go:632] Waited for 195.03231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:08:45.816699  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:08:45.816705  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.816713  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.816718  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.820607  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:46.016805  401591 request.go:632] Waited for 195.41966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.016911  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.016918  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.016926  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.016930  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.021351  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.021889  401591 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.021914  401591 pod_ready.go:82] duration metric: took 400.391171ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.021926  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.215992  401591 request.go:632] Waited for 193.955382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:08:46.216085  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:08:46.216092  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.216102  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.216108  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.219547  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:46.416084  401591 request.go:632] Waited for 195.950012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:46.416159  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:46.416167  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.416179  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.416198  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.420356  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.420972  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.420993  401591 pod_ready.go:82] duration metric: took 399.057557ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.421005  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.616254  401591 request.go:632] Waited for 195.135703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:08:46.616343  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:08:46.616355  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.616366  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.616375  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.625428  401591 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:08:46.816391  401591 request.go:632] Waited for 190.390972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.816468  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.816473  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.816482  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.816488  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.820601  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.821110  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.821133  401591 pod_ready.go:82] duration metric: took 400.121331ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.821145  401591 pod_ready.go:39] duration metric: took 3.201283112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:08:46.821161  401591 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:08:46.821222  401591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:08:46.839291  401591 api_server.go:72] duration metric: took 21.550041864s to wait for apiserver process to appear ...
	I1007 12:08:46.839326  401591 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:08:46.839354  401591 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:08:46.845263  401591 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
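
Editor's note: the two probes recorded here, a raw GET on /healthz returning "ok" followed by a GET on /version reporting v1.31.1, can be approximated with the Kubernetes discovery client. This is a sketch under the assumption of a reachable kubeconfig, not the harness's implementation.

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent of the "Checking apiserver healthz" step: a raw GET on /healthz.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body) // expect "ok"

	// Equivalent of the "control plane version" step: GET /version.
	ver, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", ver.GitVersion)
}
```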
	I1007 12:08:46.845352  401591 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:08:46.845360  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.845369  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.845373  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.846772  401591 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1007 12:08:46.846883  401591 api_server.go:141] control plane version: v1.31.1
	I1007 12:08:46.846902  401591 api_server.go:131] duration metric: took 7.569264ms to wait for apiserver health ...
	I1007 12:08:46.846910  401591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:08:47.016224  401591 request.go:632] Waited for 169.208213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.016315  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.016324  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.016337  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.016348  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.021945  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:47.026191  401591 system_pods.go:59] 17 kube-system pods found
	I1007 12:08:47.026232  401591 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:08:47.026238  401591 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:08:47.026242  401591 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:08:47.026246  401591 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:08:47.026251  401591 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:08:47.026255  401591 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:08:47.026260  401591 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:08:47.026264  401591 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:08:47.026268  401591 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:08:47.026273  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:08:47.026276  401591 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:08:47.026279  401591 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:08:47.026282  401591 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:08:47.026285  401591 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:08:47.026288  401591 system_pods.go:61] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:08:47.026291  401591 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:08:47.026294  401591 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:08:47.026300  401591 system_pods.go:74] duration metric: took 179.385599ms to wait for pod list to return data ...
	I1007 12:08:47.026311  401591 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:08:47.216777  401591 request.go:632] Waited for 190.349118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:08:47.216844  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:08:47.216851  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.216861  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.216867  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.220501  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:47.220765  401591 default_sa.go:45] found service account: "default"
	I1007 12:08:47.220790  401591 default_sa.go:55] duration metric: took 194.471685ms for default service account to be created ...
	I1007 12:08:47.220803  401591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:08:47.416131  401591 request.go:632] Waited for 195.245207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.416207  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.416215  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.416224  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.416238  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.422085  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:47.426776  401591 system_pods.go:86] 17 kube-system pods found
	I1007 12:08:47.426812  401591 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:08:47.426820  401591 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:08:47.426826  401591 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:08:47.426832  401591 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:08:47.426837  401591 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:08:47.426842  401591 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:08:47.426848  401591 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:08:47.426853  401591 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:08:47.426858  401591 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:08:47.426863  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:08:47.426868  401591 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:08:47.426873  401591 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:08:47.426881  401591 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:08:47.426887  401591 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:08:47.426892  401591 system_pods.go:89] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:08:47.426898  401591 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:08:47.426907  401591 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:08:47.426918  401591 system_pods.go:126] duration metric: took 206.105758ms to wait for k8s-apps to be running ...
	I1007 12:08:47.426931  401591 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:08:47.427006  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:08:47.444273  401591 system_svc.go:56] duration metric: took 17.328443ms WaitForService to wait for kubelet
	I1007 12:08:47.444313  401591 kubeadm.go:582] duration metric: took 22.155070744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:08:47.444339  401591 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:08:47.616864  401591 request.go:632] Waited for 172.422315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:08:47.616938  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:08:47.616945  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.616961  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.616969  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.621972  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:47.622888  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:08:47.622919  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:08:47.622945  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:08:47.622950  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:08:47.622955  401591 node_conditions.go:105] duration metric: took 178.610758ms to run NodePressure ...
	I1007 12:08:47.622983  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:08:47.623014  401591 start.go:255] writing updated cluster config ...
	I1007 12:08:47.625468  401591 out.go:201] 
	I1007 12:08:47.627200  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:47.627328  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:08:47.629319  401591 out.go:177] * Starting "ha-628553-m03" control-plane node in "ha-628553" cluster
	I1007 12:08:47.630767  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:08:47.630807  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:08:47.630955  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:08:47.630986  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:08:47.631145  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:08:47.631383  401591 start.go:360] acquireMachinesLock for ha-628553-m03: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:08:47.631439  401591 start.go:364] duration metric: took 32.151µs to acquireMachinesLock for "ha-628553-m03"
	I1007 12:08:47.631463  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:47.631573  401591 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 12:08:47.633396  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:08:47.633527  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:47.633570  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:47.650117  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I1007 12:08:47.650636  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:47.651158  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:47.651181  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:47.651622  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:47.651783  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:08:47.651941  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:08:47.652092  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:08:47.652123  401591 client.go:168] LocalClient.Create starting
	I1007 12:08:47.652165  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:08:47.652208  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:08:47.652231  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:08:47.652328  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:08:47.652361  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:08:47.652377  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:08:47.652400  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:08:47.652412  401591 main.go:141] libmachine: (ha-628553-m03) Calling .PreCreateCheck
	I1007 12:08:47.652572  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:08:47.652989  401591 main.go:141] libmachine: Creating machine...
	I1007 12:08:47.653006  401591 main.go:141] libmachine: (ha-628553-m03) Calling .Create
	I1007 12:08:47.653161  401591 main.go:141] libmachine: (ha-628553-m03) Creating KVM machine...
	I1007 12:08:47.654461  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found existing default KVM network
	I1007 12:08:47.654504  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found existing private KVM network mk-ha-628553
	I1007 12:08:47.654721  401591 main.go:141] libmachine: (ha-628553-m03) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 ...
	I1007 12:08:47.654751  401591 main.go:141] libmachine: (ha-628553-m03) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:08:47.654817  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:47.654705  402350 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:08:47.654927  401591 main.go:141] libmachine: (ha-628553-m03) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:08:47.943561  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:47.943397  402350 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa...
	I1007 12:08:48.157872  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:48.157710  402350 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/ha-628553-m03.rawdisk...
	I1007 12:08:48.157916  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Writing magic tar header
	I1007 12:08:48.157932  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Writing SSH key tar header
	I1007 12:08:48.157944  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:48.157825  402350 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 ...
	I1007 12:08:48.157970  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03
	I1007 12:08:48.158063  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 (perms=drwx------)
	I1007 12:08:48.158107  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:08:48.158121  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:08:48.158141  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:08:48.158150  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:08:48.158232  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:08:48.158257  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:08:48.158266  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:08:48.158280  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:08:48.158289  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:08:48.158307  401591 main.go:141] libmachine: (ha-628553-m03) Creating domain...
	I1007 12:08:48.158321  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:08:48.158335  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home
	I1007 12:08:48.158350  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Skipping /home - not owner
	I1007 12:08:48.159295  401591 main.go:141] libmachine: (ha-628553-m03) define libvirt domain using xml: 
	I1007 12:08:48.159314  401591 main.go:141] libmachine: (ha-628553-m03) <domain type='kvm'>
	I1007 12:08:48.159321  401591 main.go:141] libmachine: (ha-628553-m03)   <name>ha-628553-m03</name>
	I1007 12:08:48.159327  401591 main.go:141] libmachine: (ha-628553-m03)   <memory unit='MiB'>2200</memory>
	I1007 12:08:48.159361  401591 main.go:141] libmachine: (ha-628553-m03)   <vcpu>2</vcpu>
	I1007 12:08:48.159380  401591 main.go:141] libmachine: (ha-628553-m03)   <features>
	I1007 12:08:48.159389  401591 main.go:141] libmachine: (ha-628553-m03)     <acpi/>
	I1007 12:08:48.159398  401591 main.go:141] libmachine: (ha-628553-m03)     <apic/>
	I1007 12:08:48.159406  401591 main.go:141] libmachine: (ha-628553-m03)     <pae/>
	I1007 12:08:48.159416  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159423  401591 main.go:141] libmachine: (ha-628553-m03)   </features>
	I1007 12:08:48.159430  401591 main.go:141] libmachine: (ha-628553-m03)   <cpu mode='host-passthrough'>
	I1007 12:08:48.159437  401591 main.go:141] libmachine: (ha-628553-m03)   
	I1007 12:08:48.159446  401591 main.go:141] libmachine: (ha-628553-m03)   </cpu>
	I1007 12:08:48.159455  401591 main.go:141] libmachine: (ha-628553-m03)   <os>
	I1007 12:08:48.159465  401591 main.go:141] libmachine: (ha-628553-m03)     <type>hvm</type>
	I1007 12:08:48.159477  401591 main.go:141] libmachine: (ha-628553-m03)     <boot dev='cdrom'/>
	I1007 12:08:48.159488  401591 main.go:141] libmachine: (ha-628553-m03)     <boot dev='hd'/>
	I1007 12:08:48.159499  401591 main.go:141] libmachine: (ha-628553-m03)     <bootmenu enable='no'/>
	I1007 12:08:48.159508  401591 main.go:141] libmachine: (ha-628553-m03)   </os>
	I1007 12:08:48.159518  401591 main.go:141] libmachine: (ha-628553-m03)   <devices>
	I1007 12:08:48.159527  401591 main.go:141] libmachine: (ha-628553-m03)     <disk type='file' device='cdrom'>
	I1007 12:08:48.159543  401591 main.go:141] libmachine: (ha-628553-m03)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/boot2docker.iso'/>
	I1007 12:08:48.159554  401591 main.go:141] libmachine: (ha-628553-m03)       <target dev='hdc' bus='scsi'/>
	I1007 12:08:48.159561  401591 main.go:141] libmachine: (ha-628553-m03)       <readonly/>
	I1007 12:08:48.159571  401591 main.go:141] libmachine: (ha-628553-m03)     </disk>
	I1007 12:08:48.159579  401591 main.go:141] libmachine: (ha-628553-m03)     <disk type='file' device='disk'>
	I1007 12:08:48.159596  401591 main.go:141] libmachine: (ha-628553-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:08:48.159611  401591 main.go:141] libmachine: (ha-628553-m03)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/ha-628553-m03.rawdisk'/>
	I1007 12:08:48.159621  401591 main.go:141] libmachine: (ha-628553-m03)       <target dev='hda' bus='virtio'/>
	I1007 12:08:48.159629  401591 main.go:141] libmachine: (ha-628553-m03)     </disk>
	I1007 12:08:48.159639  401591 main.go:141] libmachine: (ha-628553-m03)     <interface type='network'>
	I1007 12:08:48.159647  401591 main.go:141] libmachine: (ha-628553-m03)       <source network='mk-ha-628553'/>
	I1007 12:08:48.159659  401591 main.go:141] libmachine: (ha-628553-m03)       <model type='virtio'/>
	I1007 12:08:48.159667  401591 main.go:141] libmachine: (ha-628553-m03)     </interface>
	I1007 12:08:48.159677  401591 main.go:141] libmachine: (ha-628553-m03)     <interface type='network'>
	I1007 12:08:48.159685  401591 main.go:141] libmachine: (ha-628553-m03)       <source network='default'/>
	I1007 12:08:48.159695  401591 main.go:141] libmachine: (ha-628553-m03)       <model type='virtio'/>
	I1007 12:08:48.159702  401591 main.go:141] libmachine: (ha-628553-m03)     </interface>
	I1007 12:08:48.159711  401591 main.go:141] libmachine: (ha-628553-m03)     <serial type='pty'>
	I1007 12:08:48.159722  401591 main.go:141] libmachine: (ha-628553-m03)       <target port='0'/>
	I1007 12:08:48.159732  401591 main.go:141] libmachine: (ha-628553-m03)     </serial>
	I1007 12:08:48.159741  401591 main.go:141] libmachine: (ha-628553-m03)     <console type='pty'>
	I1007 12:08:48.159751  401591 main.go:141] libmachine: (ha-628553-m03)       <target type='serial' port='0'/>
	I1007 12:08:48.159759  401591 main.go:141] libmachine: (ha-628553-m03)     </console>
	I1007 12:08:48.159769  401591 main.go:141] libmachine: (ha-628553-m03)     <rng model='virtio'>
	I1007 12:08:48.159779  401591 main.go:141] libmachine: (ha-628553-m03)       <backend model='random'>/dev/random</backend>
	I1007 12:08:48.159786  401591 main.go:141] libmachine: (ha-628553-m03)     </rng>
	I1007 12:08:48.159791  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159796  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159801  401591 main.go:141] libmachine: (ha-628553-m03)   </devices>
	I1007 12:08:48.159807  401591 main.go:141] libmachine: (ha-628553-m03) </domain>
	I1007 12:08:48.159814  401591 main.go:141] libmachine: (ha-628553-m03) 
	I1007 12:08:48.167454  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:19:9b:6c in network default
	I1007 12:08:48.168104  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring networks are active...
	I1007 12:08:48.168135  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:48.168903  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring network default is active
	I1007 12:08:48.169240  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring network mk-ha-628553 is active
	I1007 12:08:48.169699  401591 main.go:141] libmachine: (ha-628553-m03) Getting domain xml...
	I1007 12:08:48.170532  401591 main.go:141] libmachine: (ha-628553-m03) Creating domain...
	I1007 12:08:49.440366  401591 main.go:141] libmachine: (ha-628553-m03) Waiting to get IP...
	I1007 12:08:49.441248  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:49.441739  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:49.441772  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:49.441711  402350 retry.go:31] will retry after 304.052486ms: waiting for machine to come up
	I1007 12:08:49.747277  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:49.747963  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:49.747996  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:49.747904  402350 retry.go:31] will retry after 363.120796ms: waiting for machine to come up
	I1007 12:08:50.113364  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.113854  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.113886  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.113784  402350 retry.go:31] will retry after 318.214065ms: waiting for machine to come up
	I1007 12:08:50.434117  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.434742  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.434772  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.434669  402350 retry.go:31] will retry after 557.05591ms: waiting for machine to come up
	I1007 12:08:50.993368  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.993877  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.993902  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.993839  402350 retry.go:31] will retry after 534.862367ms: waiting for machine to come up
	I1007 12:08:51.530722  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:51.531299  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:51.531330  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:51.531236  402350 retry.go:31] will retry after 674.225428ms: waiting for machine to come up
	I1007 12:08:52.207219  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:52.207779  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:52.207805  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:52.207744  402350 retry.go:31] will retry after 750.38088ms: waiting for machine to come up
	I1007 12:08:52.959912  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:52.960419  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:52.960456  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:52.960375  402350 retry.go:31] will retry after 1.032745665s: waiting for machine to come up
	I1007 12:08:53.994776  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:53.995316  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:53.995345  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:53.995259  402350 retry.go:31] will retry after 1.174624993s: waiting for machine to come up
	I1007 12:08:55.171247  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:55.171687  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:55.171709  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:55.171640  402350 retry.go:31] will retry after 2.315279218s: waiting for machine to come up
	I1007 12:08:57.488351  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:57.488810  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:57.488838  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:57.488771  402350 retry.go:31] will retry after 1.769995019s: waiting for machine to come up
	I1007 12:08:59.260072  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:59.260605  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:59.260637  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:59.260547  402350 retry.go:31] will retry after 3.352254545s: waiting for machine to come up
	I1007 12:09:02.616362  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:02.616828  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:09:02.616850  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:09:02.616780  402350 retry.go:31] will retry after 4.496920566s: waiting for machine to come up
	I1007 12:09:07.118974  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:07.119565  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:09:07.119593  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:09:07.119492  402350 retry.go:31] will retry after 4.132199874s: waiting for machine to come up
	I1007 12:09:11.256196  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.256790  401591 main.go:141] libmachine: (ha-628553-m03) Found IP for machine: 192.168.39.149
	I1007 12:09:11.256824  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has current primary IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.256833  401591 main.go:141] libmachine: (ha-628553-m03) Reserving static IP address...
	I1007 12:09:11.257175  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find host DHCP lease matching {name: "ha-628553-m03", mac: "52:54:00:3c:9f:34", ip: "192.168.39.149"} in network mk-ha-628553
	I1007 12:09:11.338093  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Getting to WaitForSSH function...
	I1007 12:09:11.338124  401591 main.go:141] libmachine: (ha-628553-m03) Reserved static IP address: 192.168.39.149
	I1007 12:09:11.338139  401591 main.go:141] libmachine: (ha-628553-m03) Waiting for SSH to be available...
	I1007 12:09:11.341396  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.341892  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.341925  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.342105  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH client type: external
	I1007 12:09:11.342133  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa (-rw-------)
	I1007 12:09:11.342177  401591 main.go:141] libmachine: (ha-628553-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:09:11.342197  401591 main.go:141] libmachine: (ha-628553-m03) DBG | About to run SSH command:
	I1007 12:09:11.342214  401591 main.go:141] libmachine: (ha-628553-m03) DBG | exit 0
	I1007 12:09:11.471281  401591 main.go:141] libmachine: (ha-628553-m03) DBG | SSH cmd err, output: <nil>: 
	I1007 12:09:11.471621  401591 main.go:141] libmachine: (ha-628553-m03) KVM machine creation complete!
	I1007 12:09:11.471952  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:09:11.472582  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:11.472840  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:11.473024  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:09:11.473037  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetState
	I1007 12:09:11.474527  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:09:11.474548  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:09:11.474555  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:09:11.474563  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.477303  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.477650  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.477666  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.477788  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.477993  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.478174  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.478306  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.478470  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.478702  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.478716  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:09:11.587071  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:09:11.587095  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:09:11.587105  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.589883  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.590265  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.590295  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.590447  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.590647  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.590829  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.591025  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.591169  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.591356  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.591367  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:09:11.704302  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:09:11.704403  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:09:11.704415  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:09:11.704426  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.704723  401591 buildroot.go:166] provisioning hostname "ha-628553-m03"
	I1007 12:09:11.704750  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.704905  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.707646  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.708032  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.708062  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.708204  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.708466  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.708666  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.708795  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.708972  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.709229  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.709247  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m03 && echo "ha-628553-m03" | sudo tee /etc/hostname
	I1007 12:09:11.834437  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m03
	
	I1007 12:09:11.834498  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.837609  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.837983  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.838013  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.838374  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.838612  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.838805  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.839005  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.839175  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.839394  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.839420  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:09:11.962733  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:09:11.962765  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:09:11.962788  401591 buildroot.go:174] setting up certificates
	I1007 12:09:11.962801  401591 provision.go:84] configureAuth start
	I1007 12:09:11.962814  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.963127  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:11.965755  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.966166  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.966201  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.966379  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.968397  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.968678  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.968703  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.968812  401591 provision.go:143] copyHostCerts
	I1007 12:09:11.968847  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:09:11.968897  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:09:11.968910  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:09:11.968994  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:09:11.969133  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:09:11.969163  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:09:11.969173  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:09:11.969222  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:09:11.969301  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:09:11.969326  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:09:11.969332  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:09:11.969367  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:09:11.969444  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m03 san=[127.0.0.1 192.168.39.149 ha-628553-m03 localhost minikube]
	I1007 12:09:12.008085  401591 provision.go:177] copyRemoteCerts
	I1007 12:09:12.008153  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:09:12.008198  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.011020  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.011447  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.011479  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.011639  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.011896  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.012077  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.012241  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.099103  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:09:12.099196  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:09:12.129470  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:09:12.129570  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:09:12.156229  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:09:12.156324  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:09:12.182409  401591 provision.go:87] duration metric: took 219.592268ms to configureAuth
	I1007 12:09:12.182440  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:09:12.182689  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:12.182805  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.186445  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.186906  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.186942  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.187197  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.187409  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.187561  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.187701  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.187919  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:12.188176  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:12.188201  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:09:12.442162  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:09:12.442201  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:09:12.442252  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetURL
	I1007 12:09:12.443642  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using libvirt version 6000000
	I1007 12:09:12.445960  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.446454  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.446484  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.446704  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:09:12.446717  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:09:12.446724  401591 client.go:171] duration metric: took 24.794590297s to LocalClient.Create
	I1007 12:09:12.446748  401591 start.go:167] duration metric: took 24.794658821s to libmachine.API.Create "ha-628553"
	I1007 12:09:12.446758  401591 start.go:293] postStartSetup for "ha-628553-m03" (driver="kvm2")
	I1007 12:09:12.446768  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:09:12.446787  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.447044  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:09:12.447067  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.449182  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.449535  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.449578  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.449689  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.449866  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.450019  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.450128  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.538407  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:09:12.543112  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:09:12.543143  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:09:12.543238  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:09:12.543327  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:09:12.543349  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:09:12.543452  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:09:12.553965  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:09:12.580260  401591 start.go:296] duration metric: took 133.488077ms for postStartSetup
	I1007 12:09:12.580320  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:09:12.580945  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:12.583692  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.584096  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.584119  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.584577  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:09:12.584810  401591 start.go:128] duration metric: took 24.953224798s to createHost
	I1007 12:09:12.584834  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.586899  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.587276  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.587304  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.587460  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.587666  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.587811  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.587989  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.588157  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:12.588403  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:12.588416  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:09:12.699909  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302952.675618146
	
	I1007 12:09:12.699944  401591 fix.go:216] guest clock: 1728302952.675618146
	I1007 12:09:12.699957  401591 fix.go:229] Guest: 2024-10-07 12:09:12.675618146 +0000 UTC Remote: 2024-10-07 12:09:12.584823089 +0000 UTC m=+146.376856843 (delta=90.795057ms)
	I1007 12:09:12.699983  401591 fix.go:200] guest clock delta is within tolerance: 90.795057ms
	I1007 12:09:12.700015  401591 start.go:83] releasing machines lock for "ha-628553-m03", held for 25.068545198s
	I1007 12:09:12.700046  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.700343  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:12.703273  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.703654  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.703685  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.706106  401591 out.go:177] * Found network options:
	I1007 12:09:12.707602  401591 out.go:177]   - NO_PROXY=192.168.39.110,192.168.39.169
	W1007 12:09:12.709074  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:09:12.709105  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:09:12.709125  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.709903  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.710157  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.710281  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:09:12.710326  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	W1007 12:09:12.710331  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:09:12.710350  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:09:12.710418  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:09:12.710435  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.713091  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713270  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713549  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.713577  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713688  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.713709  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713890  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.713892  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.714094  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.714096  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.714290  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.714293  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.714448  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.714465  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.965758  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:09:12.972410  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:09:12.972510  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:09:12.991892  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:09:12.991924  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:09:12.992029  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:09:13.011092  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:09:13.027119  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:09:13.027197  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:09:13.043881  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:09:13.059996  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:09:13.194059  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:09:13.363286  401591 docker.go:233] disabling docker service ...
	I1007 12:09:13.363388  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:09:13.380238  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:09:13.395090  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:09:13.539822  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:09:13.684666  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:09:13.699806  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:09:13.721312  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:09:13.721394  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.734593  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:09:13.734678  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.746652  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.758752  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.770649  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:09:13.783579  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.796044  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.816090  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.829211  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:09:13.841584  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:09:13.841652  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:09:13.858346  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:09:13.870682  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:14.015562  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:09:14.112385  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:09:14.112472  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:09:14.117706  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:09:14.117785  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:09:14.121973  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:09:14.164678  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:09:14.164778  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:09:14.195026  401591 ssh_runner.go:195] Run: crio --version
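Every "Run:" entry above is a command executed on the guest VM over SSH using the machine's id_rsa key (the "new ssh client" lines). A minimal sketch of such a remote runner, assuming golang.org/x/crypto/ssh and a hypothetical runOverSSH helper rather than minikube's actual ssh_runner, could look like this:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH is a hypothetical helper: it dials the guest VM with a
    // private key and returns the combined output of one command.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        // Address, user and key path taken from the log above.
        out, err := runOverSSH("192.168.39.149:22", "docker",
            "/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa",
            "crio --version")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Print(out)
    }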
	I1007 12:09:14.228305  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:09:14.229710  401591 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:09:14.230954  401591 out.go:177]   - env NO_PROXY=192.168.39.110,192.168.39.169
	I1007 12:09:14.232215  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:14.235268  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:14.236414  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:14.236455  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:14.236834  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:09:14.241615  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
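The bash one-liner above rewrites /etc/hosts idempotently: it strips any stale host.minikube.internal line before appending the current mapping. A rough Go equivalent, with a hypothetical ensureHostsEntry helper (writing the real /etc/hosts needs root), might be:

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops any line ending in "\t<hostname>" from an
    // /etc/hosts-style file and appends "ip\thostname", mirroring the
    // one-liner in the log. Illustrative helper, not minikube's code.
    func ensureHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+hostname) {
                continue // drop the stale entry
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Values from the log line above.
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }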
	I1007 12:09:14.255885  401591 mustload.go:65] Loading cluster: ha-628553
	I1007 12:09:14.256171  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:14.256468  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:14.256525  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:14.272191  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I1007 12:09:14.272704  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:14.273292  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:14.273317  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:14.273675  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:14.273860  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:09:14.275739  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:09:14.276042  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:14.276078  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:14.291563  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34379
	I1007 12:09:14.291960  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:14.292503  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:14.292525  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:14.292841  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:14.293029  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:09:14.293266  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.149
	I1007 12:09:14.293282  401591 certs.go:194] generating shared ca certs ...
	I1007 12:09:14.293298  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.293454  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:09:14.293500  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:09:14.293518  401591 certs.go:256] generating profile certs ...
	I1007 12:09:14.293595  401591 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:09:14.293624  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5
	I1007 12:09:14.293644  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.149 192.168.39.254]
	I1007 12:09:14.510662  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 ...
	I1007 12:09:14.510698  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5: {Name:mke401c308480be9f53e9bff701f2e9e4cf3af88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.510883  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5 ...
	I1007 12:09:14.510897  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5: {Name:mk6ef257f67983b566726de1c934d8565c12b533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.510988  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:09:14.511123  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
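The apiserver certificate generated here carries every address a client may use to reach this control plane: the in-cluster service IP 10.96.0.1, loopback, all three node IPs and the kube-vip address 192.168.39.254. A self-contained sketch of issuing such a SAN-rich certificate with crypto/x509 follows; it creates a throwaway CA in place of the persisted minikubeCA, so it is an illustration of the technique, not minikube's implementation:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA key pair standing in for minikubeCA (ca.crt / ca.key above).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server key and certificate carrying the SAN IP list from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"kubernetes", "control-plane.minikube.internal"},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.110"), net.ParseIP("192.168.39.169"),
                net.ParseIP("192.168.39.149"), net.ParseIP("192.168.39.254"),
            },
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

        os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0644)
        os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{
            Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0600)
    }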
	I1007 12:09:14.511263  401591 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:09:14.511281  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:09:14.511294  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:09:14.511306  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:09:14.511318  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:09:14.511328  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:09:14.511341  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:09:14.511350  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:09:14.551130  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:09:14.551306  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:09:14.551354  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:09:14.551363  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:09:14.551385  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:09:14.551414  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:09:14.551453  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:09:14.551518  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:09:14.551570  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:14.551588  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:09:14.551601  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:09:14.551640  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:09:14.554905  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:14.555423  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:09:14.555460  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:14.555653  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:09:14.555879  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:09:14.556052  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:09:14.556195  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:09:14.631352  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:09:14.636908  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:09:14.651074  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:09:14.656279  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:09:14.669909  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:09:14.674787  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:09:14.685770  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:09:14.690694  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:09:14.702721  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:09:14.707691  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:09:14.719165  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:09:14.724048  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:09:14.737169  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:09:14.766716  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:09:14.794736  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:09:14.821693  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:09:14.848771  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:09:14.877403  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:09:14.903816  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:09:14.930704  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:09:14.958763  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:09:14.986639  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:09:15.012198  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:09:15.040552  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:09:15.060843  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:09:15.079624  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:09:15.099559  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:09:15.119015  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:09:15.138902  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:09:15.157844  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:09:15.176996  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:09:15.183306  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:09:15.195832  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.201336  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.201442  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.208010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:09:15.220845  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:09:15.233290  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.238387  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.238463  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.245368  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:09:15.257699  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:09:15.270151  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.274983  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.275048  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.281100  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
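Each openssl/ln pair above installs a CA into the guest's trust store: OpenSSL looks certificates up by subject hash, so a symlink named <hash>.0 pointing at the PEM file is all the runtime needs. A small sketch that shells out to the openssl binary the same way (assumes openssl is installed and root privileges for /etc/ssl/certs; trustCert is a hypothetical helper):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // trustCert computes the OpenSSL subject hash of a PEM certificate and
    // links /etc/ssl/certs/<hash>.0 at it, mirroring the commands in the log.
    func trustCert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // the "-f" in ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        for _, p := range []string{
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/384271.pem",
            "/usr/share/ca-certificates/3842712.pem",
        } {
            if err := trustCert(p); err != nil {
                fmt.Fprintln(os.Stderr, p, err)
            }
        }
    }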
	I1007 12:09:15.293845  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:09:15.298173  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:09:15.298242  401591 kubeadm.go:934] updating node {m03 192.168.39.149 8443 v1.31.1 crio true true} ...
	I1007 12:09:15.298356  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:09:15.298388  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:09:15.298436  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:09:15.316713  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:09:15.316806  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
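This static pod manifest is what keeps the control-plane VIP alive: kube-vip runs on every control-plane node, does leader election over the plndr-cp-lock lease, and whichever instance wins answers on 192.168.39.254:8443. Once the manifest lands in /etc/kubernetes/manifests and kubelet is running, a quick probe of the VIP's serving certificate (a sketch for illustration, not part of the test) could be:

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // The kube-vip static pod above holds 192.168.39.254; the API server behind
        // it serves on 8443. Verification is skipped because we only want the SANs.
        conn, err := tls.Dial("tcp", "192.168.39.254:8443", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("serving cert CN:", cert.Subject.CommonName)
        fmt.Println("IP SANs:", cert.IPAddresses)
    }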
	I1007 12:09:15.316885  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:09:15.329178  401591 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:09:15.329260  401591 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:09:15.341535  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 12:09:15.341551  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 12:09:15.341569  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:09:15.341576  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:09:15.341585  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:09:15.341597  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:09:15.341641  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:09:15.341660  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:09:15.361141  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:09:15.361169  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:09:15.361188  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:09:15.361231  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:09:15.361273  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:09:15.361282  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:09:15.386048  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:09:15.386094  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
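The "Not caching binary" lines fetch kubelet, kubeadm and kubectl from dl.k8s.io and check each download against its published .sha256 file before copying it onto the node. A stand-alone sketch of that download-and-verify step, using the kubectl URL from the log and a hypothetical fetchVerified helper:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func get(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    // fetchVerified downloads url, checks it against the SHA-256 published at
    // url+".sha256", and writes it to dst. Illustrative helper, not minikube code.
    func fetchVerified(url, dst string) error {
        body, err := get(url)
        if err != nil {
            return err
        }
        sumFile, err := get(url + ".sha256")
        if err != nil {
            return err
        }
        want := strings.Fields(string(sumFile))[0]
        sum := sha256.Sum256(body)
        if got := hex.EncodeToString(sum[:]); got != want {
            return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
        }
        return os.WriteFile(dst, body, 0755)
    }

    func main() {
        // URL taken from the "Not caching binary" line above.
        if err := fetchVerified("https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl", "/tmp/kubectl"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }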
	I1007 12:09:16.354010  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:09:16.365447  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:09:16.386247  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:09:16.405656  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:09:16.424160  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:09:16.428897  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:09:16.443784  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:16.576452  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:09:16.595070  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:09:16.595602  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:16.595675  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:16.612706  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
	I1007 12:09:16.613341  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:16.613998  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:16.614030  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:16.614425  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:16.614648  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:09:16.614817  401591 start.go:317] joinCluster: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:09:16.615034  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:09:16.615063  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:09:16.618382  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:16.618897  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:09:16.618931  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:16.619128  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:09:16.619318  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:09:16.619512  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:09:16.619676  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:09:16.786244  401591 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:09:16.786300  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lajva.py7n2yqd96dw6gb3 --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m03 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443"
	I1007 12:09:40.133777  401591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lajva.py7n2yqd96dw6gb3 --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m03 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443": (23.347442914s)
	I1007 12:09:40.133833  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:09:40.642262  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553-m03 minikube.k8s.io/updated_at=2024_10_07T12_09_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=false
	I1007 12:09:40.798800  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-628553-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:09:40.938486  401591 start.go:319] duration metric: took 24.323665443s to joinCluster
	I1007 12:09:40.938574  401591 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:09:40.938992  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:40.939839  401591 out.go:177] * Verifying Kubernetes components...
	I1007 12:09:40.941073  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:41.179331  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:09:41.207454  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:09:41.207837  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:09:41.207937  401591 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:09:41.208281  401591 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m03" to be "Ready" ...
	I1007 12:09:41.208393  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:41.208405  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:41.208416  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:41.208425  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:41.212516  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:41.709058  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:41.709088  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:41.709105  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:41.709111  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:41.712889  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:42.209244  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:42.209270  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:42.209282  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:42.209291  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:42.215411  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:42.708822  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:42.708852  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:42.708859  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:42.708864  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:42.712350  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:43.208783  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:43.208814  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:43.208825  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:43.208830  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:43.212641  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:43.213313  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:43.708554  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:43.708586  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:43.708598  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:43.708603  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:43.712869  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:44.209341  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:44.209369  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:44.209378  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:44.209383  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:44.213843  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:44.708627  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:44.708655  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:44.708667  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:44.708674  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:44.712946  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:45.208740  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:45.208767  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:45.208780  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:45.208787  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:45.212825  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:45.213803  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:45.709194  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:45.709226  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:45.709239  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:45.709244  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:45.713036  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:46.209154  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:46.209181  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:46.209192  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:46.209196  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:46.212466  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:46.708677  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:46.708707  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:46.708716  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:46.708724  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:46.712340  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:47.208818  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:47.208842  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:47.208851  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:47.208857  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:47.212615  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:47.709164  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:47.709193  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:47.709202  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:47.709205  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:47.713234  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:47.713781  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:48.209498  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:48.209525  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:48.209534  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:48.209537  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:48.213755  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:48.708587  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:48.708611  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:48.708621  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:48.708624  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:48.712036  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:49.208568  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:49.208592  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:49.208603  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:49.208607  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:49.211903  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:49.708691  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:49.708716  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:49.708725  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:49.708729  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:49.712776  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:50.208877  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:50.208902  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:50.208911  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:50.208914  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:50.212493  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:50.213081  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:50.709538  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:50.709562  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:50.709571  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:50.709575  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:50.713279  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:51.209230  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:51.209256  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:51.209265  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:51.209268  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:51.213382  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:51.708830  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:51.708854  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:51.708862  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:51.708866  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:51.712240  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:52.208900  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:52.208926  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:52.208939  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:52.208946  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:52.215313  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:52.216003  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:52.708705  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:52.708730  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:52.708738  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:52.708742  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:52.712616  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:53.209443  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:53.209470  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:53.209480  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:53.209484  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:53.220542  401591 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:09:53.709519  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:53.709546  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:53.709558  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:53.709564  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:53.716163  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:54.208707  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:54.208734  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:54.208746  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:54.208760  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:54.213435  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:54.708587  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:54.708610  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:54.708619  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:54.708622  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:54.712056  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:54.712859  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:55.209203  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:55.209231  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:55.209239  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:55.209245  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:55.212768  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:55.708667  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:55.708695  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:55.708703  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:55.708707  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:55.712313  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.209354  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:56.209383  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.209395  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.209403  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.213377  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.708881  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:56.708908  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.708919  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.708924  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.712370  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.712935  401591 node_ready.go:49] node "ha-628553-m03" has status "Ready":"True"
	I1007 12:09:56.712963  401591 node_ready.go:38] duration metric: took 15.504655916s for node "ha-628553-m03" to be "Ready" ...
	I1007 12:09:56.712977  401591 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:09:56.713073  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:09:56.713085  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.713097  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.713103  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.718978  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:09:56.726344  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.726456  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:09:56.726466  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.726474  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.726490  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.730546  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:56.731604  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.731626  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.731635  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.731641  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.735028  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.735631  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.735652  401591 pod_ready.go:82] duration metric: took 9.273238ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.735664  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.735733  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:09:56.735741  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.735750  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.735755  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.739406  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.740176  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.740199  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.740209  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.740214  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.743560  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.744246  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.744282  401591 pod_ready.go:82] duration metric: took 8.60988ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.744297  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.744377  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:09:56.744385  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.744394  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.744399  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.747762  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.748602  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.748620  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.748631  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.748635  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.751819  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.752620  401591 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.752643  401591 pod_ready.go:82] duration metric: took 8.33893ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.752653  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.752721  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:09:56.752728  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.752736  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.752744  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.755841  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.756900  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:56.756919  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.756928  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.756933  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.762051  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:09:56.762546  401591 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.762567  401591 pod_ready.go:82] duration metric: took 9.907016ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.762577  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.908942  401591 request.go:632] Waited for 146.263139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:09:56.909015  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:09:56.909020  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.909028  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.909033  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.912564  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.109760  401591 request.go:632] Waited for 196.38743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:57.109828  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:57.109833  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.109841  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.109845  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.113445  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.114014  401591 pod_ready.go:93] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.114033  401591 pod_ready.go:82] duration metric: took 351.449136ms for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.114057  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.309353  401591 request.go:632] Waited for 195.205622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:09:57.309419  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:09:57.309425  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.309432  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.309437  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.313075  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.509082  401591 request.go:632] Waited for 195.305317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:57.509151  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:57.509155  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.509166  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.509174  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.512625  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.513112  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.513132  401591 pod_ready.go:82] duration metric: took 399.067745ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.513143  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.709708  401591 request.go:632] Waited for 196.474408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:09:57.709781  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:09:57.709786  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.709794  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.709800  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.713831  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:57.908898  401591 request.go:632] Waited for 194.228676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:57.908982  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:57.908989  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.909010  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.909018  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.912443  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.912928  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.912946  401591 pod_ready.go:82] duration metric: took 399.796848ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.912957  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.109126  401591 request.go:632] Waited for 196.089672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:09:58.109228  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:09:58.109239  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.109254  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.109263  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.113302  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:58.309458  401591 request.go:632] Waited for 195.377342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:58.309526  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:58.309532  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.309540  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.309547  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.313264  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.313917  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:58.313941  401591 pod_ready.go:82] duration metric: took 400.976971ms for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.313953  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.508886  401591 request.go:632] Waited for 194.833329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:09:58.508952  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:09:58.508957  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.508965  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.508968  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.512699  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.709582  401591 request.go:632] Waited for 196.246847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:58.709646  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:58.709651  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.709659  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.709664  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.713267  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.713852  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:58.713872  401591 pod_ready.go:82] duration metric: took 399.911675ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.713882  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.909557  401591 request.go:632] Waited for 195.589727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:09:58.909638  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:09:58.909646  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.909658  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.909667  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.913323  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.109300  401591 request.go:632] Waited for 195.248412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:59.109385  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:59.109397  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.109413  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.109423  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.113724  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.114391  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.114424  401591 pod_ready.go:82] duration metric: took 400.532344ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.114440  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.309421  401591 request.go:632] Waited for 194.863237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:09:59.309496  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:09:59.309505  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.309513  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.309517  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.313524  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.509863  401591 request.go:632] Waited for 195.376113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.509933  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.509939  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.509947  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.509952  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.514238  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.514980  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.515006  401591 pod_ready.go:82] duration metric: took 400.556348ms for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.515021  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.708902  401591 request.go:632] Waited for 193.788377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:09:59.708979  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:09:59.708984  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.708994  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.708999  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.713254  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.909528  401591 request.go:632] Waited for 195.290175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.909618  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.909629  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.909647  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.909670  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.913334  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.913821  401591 pod_ready.go:93] pod "kube-proxy-956k4" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.913839  401591 pod_ready.go:82] duration metric: took 398.810891ms for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.913849  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.108920  401591 request.go:632] Waited for 194.960284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:10:00.108989  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:10:00.108994  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.109003  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.109008  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.112562  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.309314  401591 request.go:632] Waited for 195.880007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:00.309383  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:00.309388  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.309398  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.309402  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.312741  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.313358  401591 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:00.313387  401591 pod_ready.go:82] duration metric: took 399.529803ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.313403  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.509443  401591 request.go:632] Waited for 195.933785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:10:00.509525  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:10:00.509534  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.509546  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.509553  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.513184  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.709406  401591 request.go:632] Waited for 195.365479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:00.709504  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:00.709514  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.709522  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.709529  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.713607  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:00.714279  401591 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:00.714309  401591 pod_ready.go:82] duration metric: took 400.896557ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.714325  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.909245  401591 request.go:632] Waited for 194.818143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:10:00.909342  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:10:00.909351  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.909364  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.909371  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.915481  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:01.109624  401591 request.go:632] Waited for 193.409101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:01.109691  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:01.109697  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.109705  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.109709  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.113699  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.114360  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.114385  401591 pod_ready.go:82] duration metric: took 400.050276ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.114400  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.309693  401591 request.go:632] Waited for 195.205987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:10:01.309795  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:10:01.309803  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.309815  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.309822  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.313815  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.508909  401591 request.go:632] Waited for 194.37677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:01.508986  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:01.508991  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.509002  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.509007  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.512742  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.513256  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.513276  401591 pod_ready.go:82] duration metric: took 398.86838ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.513288  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.709917  401591 request.go:632] Waited for 196.548883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:10:01.710017  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:10:01.710026  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.710034  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.710039  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.714122  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:01.909434  401591 request.go:632] Waited for 194.3948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:10:01.909513  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:10:01.909522  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.909532  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.909540  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.913611  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:01.914046  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.914070  401591 pod_ready.go:82] duration metric: took 400.775584ms for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.914081  401591 pod_ready.go:39] duration metric: took 5.201089226s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:10:01.914096  401591 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:10:01.914154  401591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:10:01.933363  401591 api_server.go:72] duration metric: took 20.994747532s to wait for apiserver process to appear ...
	I1007 12:10:01.933396  401591 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:10:01.933418  401591 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:10:01.938101  401591 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1007 12:10:01.938189  401591 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:10:01.938198  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.938207  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.938213  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.939122  401591 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:10:01.939199  401591 api_server.go:141] control plane version: v1.31.1
	I1007 12:10:01.939214  401591 api_server.go:131] duration metric: took 5.812529ms to wait for apiserver health ...
	I1007 12:10:01.939225  401591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:10:02.109608  401591 request.go:632] Waited for 170.278268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.109688  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.109696  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.109710  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.109721  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.116583  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:02.124470  401591 system_pods.go:59] 24 kube-system pods found
	I1007 12:10:02.124519  401591 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:10:02.124524  401591 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:10:02.124528  401591 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:10:02.124532  401591 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:10:02.124537  401591 system_pods.go:61] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:10:02.124541  401591 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:10:02.124545  401591 system_pods.go:61] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:10:02.124549  401591 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:10:02.124553  401591 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:10:02.124556  401591 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:10:02.124559  401591 system_pods.go:61] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:10:02.124563  401591 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:10:02.124566  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:10:02.124569  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:10:02.124572  401591 system_pods.go:61] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:10:02.124576  401591 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:10:02.124579  401591 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:10:02.124582  401591 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:10:02.124585  401591 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:10:02.124588  401591 system_pods.go:61] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:10:02.124591  401591 system_pods.go:61] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:10:02.124594  401591 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:10:02.124597  401591 system_pods.go:61] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:10:02.124600  401591 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:10:02.124608  401591 system_pods.go:74] duration metric: took 185.374126ms to wait for pod list to return data ...
	I1007 12:10:02.124621  401591 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:10:02.309914  401591 request.go:632] Waited for 185.18335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:10:02.309989  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:10:02.309995  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.310010  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.310017  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.318042  401591 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:10:02.318207  401591 default_sa.go:45] found service account: "default"
	I1007 12:10:02.318235  401591 default_sa.go:55] duration metric: took 193.599365ms for default service account to be created ...
	I1007 12:10:02.318250  401591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:10:02.509774  401591 request.go:632] Waited for 191.420927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.509840  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.509853  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.509866  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.509875  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.516685  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:02.523464  401591 system_pods.go:86] 24 kube-system pods found
	I1007 12:10:02.523503  401591 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:10:02.523511  401591 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:10:02.523516  401591 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:10:02.523522  401591 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:10:02.523528  401591 system_pods.go:89] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:10:02.523534  401591 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:10:02.523539  401591 system_pods.go:89] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:10:02.523573  401591 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:10:02.523579  401591 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:10:02.523585  401591 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:10:02.523591  401591 system_pods.go:89] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:10:02.523606  401591 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:10:02.523613  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:10:02.523619  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:10:02.523628  401591 system_pods.go:89] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:10:02.523634  401591 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:10:02.523640  401591 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:10:02.523651  401591 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:10:02.523657  401591 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:10:02.523662  401591 system_pods.go:89] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:10:02.523668  401591 system_pods.go:89] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:10:02.523674  401591 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:10:02.523679  401591 system_pods.go:89] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:10:02.523685  401591 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:10:02.523697  401591 system_pods.go:126] duration metric: took 205.439551ms to wait for k8s-apps to be running ...
	I1007 12:10:02.523709  401591 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:10:02.523771  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:10:02.542038  401591 system_svc.go:56] duration metric: took 18.318301ms WaitForService to wait for kubelet
	I1007 12:10:02.542084  401591 kubeadm.go:582] duration metric: took 21.603472414s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:10:02.542109  401591 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:10:02.709771  401591 request.go:632] Waited for 167.539386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:10:02.709854  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:10:02.709863  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.709874  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.709884  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.713363  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:02.714361  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714384  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714396  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714401  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714406  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714409  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714415  401591 node_conditions.go:105] duration metric: took 172.299605ms to run NodePressure ...
	I1007 12:10:02.714430  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:10:02.714459  401591 start.go:255] writing updated cluster config ...
	I1007 12:10:02.714781  401591 ssh_runner.go:195] Run: rm -f paused
	I1007 12:10:02.769817  401591 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:10:02.771879  401591 out.go:177] * Done! kubectl is now configured to use "ha-628553" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.177307751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303229177283668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7176249a-2182-4352-a146-91902f641497 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.178206663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ac3819f-d817-4354-a893-19b5b6740b27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.178261699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ac3819f-d817-4354-a893-19b5b6740b27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.178564899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ac3819f-d817-4354-a893-19b5b6740b27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.228593037Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2b97a0db-eaeb-4142-86bb-c3acc83671fc name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.228689233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2b97a0db-eaeb-4142-86bb-c3acc83671fc name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.229944017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74b19c30-ece0-4e7c-a837-0452f2bbb96c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.230396577Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303229230372759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74b19c30-ece0-4e7c-a837-0452f2bbb96c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.231098872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07957032-4c8f-4563-a5c3-80d7542831cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.231192308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07957032-4c8f-4563-a5c3-80d7542831cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.231484811Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07957032-4c8f-4563-a5c3-80d7542831cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.270220897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e8e9742-32fa-46e9-83e9-53b14c52122a name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.270293559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e8e9742-32fa-46e9-83e9-53b14c52122a name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.271838912Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d8abdea-b7bf-43ab-a53f-80f9e01a2dea name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.272266584Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303229272243521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d8abdea-b7bf-43ab-a53f-80f9e01a2dea name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.272902666Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6985fb9-18df-41ca-bd23-b16194c4bf34 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.272979331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6985fb9-18df-41ca-bd23-b16194c4bf34 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.273214753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6985fb9-18df-41ca-bd23-b16194c4bf34 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.314358871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0e1de45-9119-4b7e-90c2-c8f474186721 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.314448734Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0e1de45-9119-4b7e-90c2-c8f474186721 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.315922653Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a57f6ae-0004-4313-90d8-4a3a2211c19d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.316355721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303229316331195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a57f6ae-0004-4313-90d8-4a3a2211c19d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.316979332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1a8a598-3d91-4660-afa5-433d2b6a48b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.317033437Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1a8a598-3d91-4660-afa5-433d2b6a48b4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:49 ha-628553 crio[670]: time="2024-10-07 12:13:49.317254800Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1a8a598-3d91-4660-afa5-433d2b6a48b4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cac09519e9d83       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3588af1ea926c       busybox-7dff88458-vc5k8
	914d5a55b5b7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   e4273414ae3c9       storage-provisioner
	4dcac83715ae5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   7a74be057c048       coredns-7c65d6cfc9-rsr6v
	0a438e52c0996       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   66f721a704d2d       coredns-7c65d6cfc9-ktmzq
	b10875321ed8d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   883a1bf7435de       kindnet-snf5v
	4a0b203aaca5a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   4ad2a2a2eae50       kube-proxy-h6vg8
	41e1b6a866662       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   9107fefdb6eca       kube-vip-ha-628553
	02649d86a8d5c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   e611d474900bc       etcd-ha-628553
	1a3ce3a4cad16       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   adfc5c5b9565a       kube-scheduler-ha-628553
	73e39c7d2b39b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ce8ef37c98c4f       kube-controller-manager-ha-628553
	919f5b2c17a09       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   923ba0f2be002       kube-apiserver-ha-628553
	
	
	==> coredns [0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68] <==
	[INFO] 10.244.1.2:59173 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004406792s
	[INFO] 10.244.1.2:44478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000424413s
	[INFO] 10.244.1.2:58960 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000183491s
	[INFO] 10.244.1.3:35630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000291506s
	[INFO] 10.244.1.3:42806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002399052s
	[INFO] 10.244.1.3:42397 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126644s
	[INFO] 10.244.1.3:34571 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001931949s
	[INFO] 10.244.1.3:54485 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000378487s
	[INFO] 10.244.1.3:58977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105091s
	[INFO] 10.244.0.4:38892 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002053345s
	[INFO] 10.244.0.4:58836 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172655s
	[INFO] 10.244.0.4:55251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065314s
	[INFO] 10.244.0.4:53436 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001570291s
	[INFO] 10.244.0.4:48063 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00004804s
	[INFO] 10.244.1.2:57025 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153957s
	[INFO] 10.244.1.2:40431 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012349s
	[INFO] 10.244.1.3:37153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139765s
	[INFO] 10.244.1.3:45214 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157416s
	[INFO] 10.244.1.3:47978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094264s
	[INFO] 10.244.0.4:57791 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080137s
	[INFO] 10.244.1.2:51888 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215918s
	[INFO] 10.244.1.2:42893 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166709s
	[INFO] 10.244.1.3:36056 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000172229s
	[INFO] 10.244.1.3:44744 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113708s
	[INFO] 10.244.0.4:56467 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102183s
	
	
	==> coredns [4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed] <==
	[INFO] 10.244.1.3:51613 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000585499s
	[INFO] 10.244.1.3:40629 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001993531s
	[INFO] 10.244.0.4:40285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000080316s
	[INFO] 10.244.1.2:53385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200211s
	[INFO] 10.244.1.2:46841 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028903254s
	[INFO] 10.244.1.2:36156 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000295572s
	[INFO] 10.244.1.2:46979 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159813s
	[INFO] 10.244.1.3:47839 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190478s
	[INFO] 10.244.1.3:55618 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000314649s
	[INFO] 10.244.0.4:52728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150624s
	[INFO] 10.244.0.4:42394 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090784s
	[INFO] 10.244.0.4:57656 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107027s
	[INFO] 10.244.1.2:36030 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124775s
	[INFO] 10.244.1.2:57899 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082756s
	[INFO] 10.244.1.3:44889 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195326s
	[INFO] 10.244.0.4:59043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137163s
	[INFO] 10.244.0.4:52080 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217774s
	[INFO] 10.244.0.4:40645 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102774s
	[INFO] 10.244.1.2:59521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150669s
	[INFO] 10.244.1.2:34929 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205398s
	[INFO] 10.244.1.3:50337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185196s
	[INFO] 10.244.1.3:51645 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000242498s
	[INFO] 10.244.0.4:58847 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134448s
	[INFO] 10.244.0.4:51647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147028s
	[INFO] 10.244.0.4:54351 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131375s
	
	
	==> describe nodes <==
	Name:               ha-628553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_07_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-628553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a13f7b7982a74b9eb8f82488f9c3d1a6
	  System UUID:                a13f7b79-82a7-4b9e-b8f8-2488f9c3d1a6
	  Boot ID:                    288ea8ab-36c4-4d6a-9093-1f2ac800cc46
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vc5k8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 coredns-7c65d6cfc9-ktmzq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 coredns-7c65d6cfc9-rsr6v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 etcd-ha-628553                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-snf5v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-628553             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-628553    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-h6vg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-628553             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-628553                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m16s  kube-proxy       
	  Normal  Starting                 6m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m21s  kubelet          Node ha-628553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s  kubelet          Node ha-628553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s  kubelet          Node ha-628553 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  NodeReady                6m5s   kubelet          Node ha-628553 status is now: NodeReady
	  Normal  RegisteredNode           5m18s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  RegisteredNode           4m4s   node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	
	
	Name:               ha-628553-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_08_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:08:22 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:11:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-628553-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ba9ae7572f54f4ab8de307b6e86da52
	  System UUID:                4ba9ae75-72f5-4f4a-b8de-307b6e86da52
	  Boot ID:                    30fbb024-4877-4642-abd8-af8d3d30f079
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-75ng4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  default                     busybox-7dff88458-jhmrp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-628553-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m25s
	  kube-system                 kindnet-9rq2w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-apiserver-ha-628553-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-628553-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-proxy-s5c6d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-scheduler-ha-628553-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-vip-ha-628553-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m27s (x8 over 5m27s)  kubelet          Node ha-628553-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x8 over 5m27s)  kubelet          Node ha-628553-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x7 over 5m27s)  kubelet          Node ha-628553-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-628553-m02 status is now: NodeNotReady
	
	
	Name:               ha-628553-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_09_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:09:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-628553-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aab92960db1b4070940c89c6ff930351
	  System UUID:                aab92960-db1b-4070-940c-89c6ff930351
	  Boot ID:                    77629bba-9229-47e7-80cf-730097c43666
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-628553-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m10s
	  kube-system                 kindnet-sb4xd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-628553-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-628553-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-956k4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-628553-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-628553-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-628553-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	
	
	Name:               ha-628553-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_10_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:10:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:11:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    ha-628553-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7e249f18a3f466abcbb6b94b02ed2ec
	  System UUID:                b7e249f1-8a3f-466a-bcbb-6b94b02ed2ec
	  Boot ID:                    dd833219-3ee8-4ed9-aae9-d441f250fa96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwk2r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m8s
	  kube-system                 kube-proxy-fkzqr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m8s (x2 over 3m8s)  kubelet          Node ha-628553-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x2 over 3m8s)  kubelet          Node ha-628553-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x2 over 3m8s)  kubelet          Node ha-628553-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m7s                 node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  NodeReady                2m48s                kubelet          Node ha-628553-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 12:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051409] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040490] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.878273] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.715451] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Oct 7 12:07] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.378547] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061855] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066201] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.180086] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.153013] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.284998] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.180207] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.207557] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.061569] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.415206] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.085223] kauditd_printk_skb: 79 callbacks suppressed
	[  +4.998659] kauditd_printk_skb: 26 callbacks suppressed
	[ +12.170600] kauditd_printk_skb: 33 callbacks suppressed
	[Oct 7 12:08] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969] <==
	{"level":"warn","ts":"2024-10-07T12:13:49.595711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.603245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.608286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.612950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.629152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.637100Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.644311Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.648342Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.651901Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.659218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.665690Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.672847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.677310Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.680950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.686849Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.689854Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.693242Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.701940Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.705561Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.709419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.714453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.722512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.729694Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.733594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:49.790256Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:13:49 up 6 min,  0 users,  load average: 0.52, 0.30, 0.15
	Linux ha-628553 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e] <==
	I1007 12:13:14.287136       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:24.295723       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:24.295884       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:24.296132       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:24.296167       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:24.296254       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:24.296275       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:24.296365       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:24.296384       1 main.go:299] handling current node
	I1007 12:13:34.285463       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:34.285588       1 main.go:299] handling current node
	I1007 12:13:34.285620       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:34.285640       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:34.285850       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:34.285880       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:34.285943       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:34.285960       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:44.285393       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:44.285467       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:44.285666       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:44.285751       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:44.285880       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:44.285904       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:44.285950       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:44.285956       1 main.go:299] handling current node
	
	
	==> kube-apiserver [919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544] <==
	I1007 12:07:27.794940       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 12:07:27.933633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 12:07:32.075355       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1007 12:07:32.486677       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1007 12:08:23.102352       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.102586       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.764µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1007 12:08:23.104149       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.105567       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.106920       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.674679ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1007 12:10:08.360356       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40292: use of closed network connection
	E1007 12:10:08.561113       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40308: use of closed network connection
	E1007 12:10:08.787138       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40330: use of closed network connection
	E1007 12:10:09.028668       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40344: use of closed network connection
	E1007 12:10:09.244263       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40368: use of closed network connection
	E1007 12:10:09.466935       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40384: use of closed network connection
	E1007 12:10:09.660058       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40410: use of closed network connection
	E1007 12:10:09.852210       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40416: use of closed network connection
	E1007 12:10:10.061165       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40432: use of closed network connection
	E1007 12:10:10.408420       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40450: use of closed network connection
	E1007 12:10:10.612165       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40466: use of closed network connection
	E1007 12:10:10.805485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40472: use of closed network connection
	E1007 12:10:10.999177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40496: use of closed network connection
	E1007 12:10:11.210763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40502: use of closed network connection
	E1007 12:10:11.463496       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40532: use of closed network connection
	W1007 12:11:36.878261       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110 192.168.39.149]
	
	
	==> kube-controller-manager [73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee] <==
	I1007 12:10:41.965922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.001526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.152486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.245459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.660674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:45.679644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:45.726419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:46.774324       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-628553-m04"
	I1007 12:10:46.775093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:46.796998       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:52.359490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:01.889908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-628553-m04"
	I1007 12:11:01.891629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:01.908947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:02.079930       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:12.784052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:56.797865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-628553-m04"
	I1007 12:11:56.798196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:11:56.825210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:11:56.976985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.040351ms"
	I1007 12:11:56.977093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.478µs"
	I1007 12:11:57.005615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.252446ms"
	I1007 12:11:57.005705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.783µs"
	I1007 12:12:00.745939       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:12:02.094451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	
	
	==> kube-proxy [4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:07:33.298365       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:07:33.336456       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E1007 12:07:33.336571       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:07:33.434284       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:07:33.434331       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:07:33.434355       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:07:33.445592       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:07:33.454423       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:07:33.454444       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:07:33.463602       1 config.go:199] "Starting service config controller"
	I1007 12:07:33.467216       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:07:33.467268       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:07:33.467274       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:07:33.472850       1 config.go:328] "Starting node config controller"
	I1007 12:07:33.472863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:07:33.568004       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:07:33.568062       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:07:33.573613       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4] <==
	E1007 12:07:26.382246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:07:26.387024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 12:07:26.387119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:07:26.410415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 12:07:26.410570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 12:07:27.604975       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 12:10:03.714499       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="38d0a2a6-0d77-403c-86e7-405837d8ca25" pod="default/busybox-7dff88458-jhmrp" assumedNode="ha-628553-m02" currentNode="ha-628553-m03"
	E1007 12:10:03.740391       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jhmrp\": pod busybox-7dff88458-jhmrp is already assigned to node \"ha-628553-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jhmrp" node="ha-628553-m03"
	E1007 12:10:03.743143       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 38d0a2a6-0d77-403c-86e7-405837d8ca25(default/busybox-7dff88458-jhmrp) was assumed on ha-628553-m03 but assigned to ha-628553-m02" pod="default/busybox-7dff88458-jhmrp"
	E1007 12:10:03.745165       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jhmrp\": pod busybox-7dff88458-jhmrp is already assigned to node \"ha-628553-m02\"" pod="default/busybox-7dff88458-jhmrp"
	I1007 12:10:03.747831       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jhmrp" node="ha-628553-m02"
	E1007 12:10:03.791061       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vc5k8\": pod busybox-7dff88458-vc5k8 is already assigned to node \"ha-628553\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vc5k8" node="ha-628553-m03"
	E1007 12:10:03.791192       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vc5k8\": pod busybox-7dff88458-vc5k8 is already assigned to node \"ha-628553\"" pod="default/busybox-7dff88458-vc5k8"
	E1007 12:10:03.910449       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-47zsz\": pod busybox-7dff88458-47zsz is already assigned to node \"ha-628553-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-47zsz" node="ha-628553-m03"
	E1007 12:10:03.910515       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 674a626e-9fe6-4875-a34f-cc4d729e2bb1(default/busybox-7dff88458-47zsz) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-47zsz"
	E1007 12:10:03.910531       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-47zsz\": pod busybox-7dff88458-47zsz is already assigned to node \"ha-628553-m03\"" pod="default/busybox-7dff88458-47zsz"
	I1007 12:10:03.910555       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-47zsz" node="ha-628553-m03"
	E1007 12:10:42.040635       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwk2r\": pod kindnet-rwk2r is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwk2r" node="ha-628553-m04"
	E1007 12:10:42.042987       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwk2r\": pod kindnet-rwk2r is already assigned to node \"ha-628553-m04\"" pod="kube-system/kindnet-rwk2r"
	E1007 12:10:42.079633       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kl4j4\": pod kindnet-kl4j4 is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kl4j4" node="ha-628553-m04"
	E1007 12:10:42.079724       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 244c4da8-46b7-4627-a7ad-60e7ff405b0a(kube-system/kindnet-kl4j4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kl4j4"
	E1007 12:10:42.079846       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kl4j4\": pod kindnet-kl4j4 is already assigned to node \"ha-628553-m04\"" pod="kube-system/kindnet-kl4j4"
	I1007 12:10:42.079871       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kl4j4" node="ha-628553-m04"
	E1007 12:10:42.086167       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-g2fwp\": pod kube-proxy-g2fwp is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-g2fwp" node="ha-628553-m04"
	E1007 12:10:42.086272       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-g2fwp\": pod kube-proxy-g2fwp is already assigned to node \"ha-628553-m04\"" pod="kube-system/kube-proxy-g2fwp"
	
	
	==> kubelet <==
	Oct 07 12:12:27 ha-628553 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:12:27 ha-628553 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:12:28 ha-628553 kubelet[1314]: E1007 12:12:28.044744    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303148044534034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:28 ha-628553 kubelet[1314]: E1007 12:12:28.044838    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303148044534034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:38 ha-628553 kubelet[1314]: E1007 12:12:38.050523    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303158047005260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:38 ha-628553 kubelet[1314]: E1007 12:12:38.051561    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303158047005260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:48 ha-628553 kubelet[1314]: E1007 12:12:48.053900    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303168053449361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:48 ha-628553 kubelet[1314]: E1007 12:12:48.053963    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303168053449361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:58 ha-628553 kubelet[1314]: E1007 12:12:58.055856    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303178055537621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:58 ha-628553 kubelet[1314]: E1007 12:12:58.055895    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303178055537621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:08 ha-628553 kubelet[1314]: E1007 12:13:08.057102    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303188056723208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:08 ha-628553 kubelet[1314]: E1007 12:13:08.057351    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303188056723208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:18 ha-628553 kubelet[1314]: E1007 12:13:18.061478    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303198060609364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:18 ha-628553 kubelet[1314]: E1007 12:13:18.061853    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303198060609364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:27 ha-628553 kubelet[1314]: E1007 12:13:27.990111    1314 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:13:27 ha-628553 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:13:28 ha-628553 kubelet[1314]: E1007 12:13:28.063998    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303208063333958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:28 ha-628553 kubelet[1314]: E1007 12:13:28.064098    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303208063333958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:38 ha-628553 kubelet[1314]: E1007 12:13:38.066580    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303218065435839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:38 ha-628553 kubelet[1314]: E1007 12:13:38.066632    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303218065435839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:48 ha-628553 kubelet[1314]: E1007 12:13:48.067728    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303228067468647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:48 ha-628553 kubelet[1314]: E1007 12:13:48.067868    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303228067468647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-628553 -n ha-628553
helpers_test.go:261: (dbg) Run:  kubectl --context ha-628553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.80s)
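Note on the kubelet excerpt above: both recurring messages repeat every few seconds throughout the log and are not specific to this check. The eviction manager cannot derive HasDedicatedImageFs because the CRI-O ImageFsInfo response carries no ContainerFilesystems entry, and the iptables canary cannot create the KUBE-KUBELET-CANARY chain because the ip6tables nat table is unavailable in the guest kernel. A minimal sketch for confirming the second symptom from the host, following the ssh invocation style used elsewhere in this report (assumes lsmod is present in the Buildroot guest, and the module name ip6table_nat is an assumption about the guest kernel build):

  out/minikube-linux-amd64 -p ha-628553 ssh sudo ip6tables -t nat -L -n   # expected to fail with the same "table does not exist" message
  out/minikube-linux-amd64 -p ha-628553 ssh lsmod                         # ip6table_nat expected to be absent from the module list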

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr: (3.790306868s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
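The assertions at ha_test.go:437-446 scan the plain-text status output for three control-plane nodes, four running hosts and kubelets, and three apiservers; the status output echoed after each "args" string appears blank in this report, consistent with all four checks failing at once. A sketch of the equivalent manual check, reusing the --format flag the harness already exercises (the .Name and .Kubelet template fields and the JSON output form are assumptions that may vary by minikube version):

  out/minikube-linux-amd64 -p ha-628553 status --format '{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
  out/minikube-linux-amd64 -p ha-628553 status --output json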
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-628553 -n ha-628553
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-628553 logs -n 25: (1.497476552s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553:/home/docker/cp-test_ha-628553-m03_ha-628553.txt                       |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553 sudo cat                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553.txt                                 |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m04 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp testdata/cp-test.txt                                                | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553:/home/docker/cp-test_ha-628553-m04_ha-628553.txt                       |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553 sudo cat                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553.txt                                 |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03:/home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m03 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-628553 node stop m02 -v=7                                                     | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-628553 node start m02 -v=7                                                    | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:06:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:06:46.248953  401591 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:06:46.249102  401591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:46.249113  401591 out.go:358] Setting ErrFile to fd 2...
	I1007 12:06:46.249117  401591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:46.249326  401591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:06:46.249966  401591 out.go:352] Setting JSON to false
	I1007 12:06:46.250938  401591 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6552,"bootTime":1728296254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:06:46.251073  401591 start.go:139] virtualization: kvm guest
	I1007 12:06:46.253469  401591 out.go:177] * [ha-628553] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:06:46.255142  401591 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:06:46.255180  401591 notify.go:220] Checking for updates...
	I1007 12:06:46.257412  401591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:06:46.258630  401591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:06:46.259784  401591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.261129  401591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:06:46.262379  401591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:06:46.263655  401591 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:06:46.300943  401591 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 12:06:46.302472  401591 start.go:297] selected driver: kvm2
	I1007 12:06:46.302493  401591 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:06:46.302513  401591 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:06:46.303566  401591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:06:46.303697  401591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:06:46.319358  401591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:06:46.319408  401591 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:06:46.319656  401591 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:06:46.319692  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:06:46.319741  401591 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 12:06:46.319766  401591 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:06:46.319825  401591 start.go:340] cluster config:
	{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1007 12:06:46.319936  401591 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:06:46.321805  401591 out.go:177] * Starting "ha-628553" primary control-plane node in "ha-628553" cluster
	I1007 12:06:46.323163  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:06:46.323208  401591 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:06:46.323219  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:06:46.323305  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:06:46.323316  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:06:46.323679  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:06:46.323704  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json: {Name:mk2a07965de558fa93dada604e58b87e56b9c04c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:06:46.323847  401591 start.go:360] acquireMachinesLock for ha-628553: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:06:46.323875  401591 start.go:364] duration metric: took 15.967µs to acquireMachinesLock for "ha-628553"
	I1007 12:06:46.323891  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:06:46.323965  401591 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 12:06:46.325764  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:06:46.325922  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:06:46.325971  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:06:46.341278  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I1007 12:06:46.341788  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:06:46.342304  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:06:46.342327  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:06:46.342728  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:06:46.342902  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:06:46.343093  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:06:46.343232  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:06:46.343262  401591 client.go:168] LocalClient.Create starting
	I1007 12:06:46.343300  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:06:46.343339  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:06:46.343361  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:06:46.343431  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:06:46.343449  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:06:46.343461  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:06:46.343477  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:06:46.343525  401591 main.go:141] libmachine: (ha-628553) Calling .PreCreateCheck
	I1007 12:06:46.343857  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:06:46.344200  401591 main.go:141] libmachine: Creating machine...
	I1007 12:06:46.344213  401591 main.go:141] libmachine: (ha-628553) Calling .Create
	I1007 12:06:46.344334  401591 main.go:141] libmachine: (ha-628553) Creating KVM machine...
	I1007 12:06:46.345527  401591 main.go:141] libmachine: (ha-628553) DBG | found existing default KVM network
	I1007 12:06:46.346242  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.346122  401614 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I1007 12:06:46.346346  401591 main.go:141] libmachine: (ha-628553) DBG | created network xml: 
	I1007 12:06:46.346370  401591 main.go:141] libmachine: (ha-628553) DBG | <network>
	I1007 12:06:46.346380  401591 main.go:141] libmachine: (ha-628553) DBG |   <name>mk-ha-628553</name>
	I1007 12:06:46.346391  401591 main.go:141] libmachine: (ha-628553) DBG |   <dns enable='no'/>
	I1007 12:06:46.346402  401591 main.go:141] libmachine: (ha-628553) DBG |   
	I1007 12:06:46.346407  401591 main.go:141] libmachine: (ha-628553) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 12:06:46.346415  401591 main.go:141] libmachine: (ha-628553) DBG |     <dhcp>
	I1007 12:06:46.346420  401591 main.go:141] libmachine: (ha-628553) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 12:06:46.346428  401591 main.go:141] libmachine: (ha-628553) DBG |     </dhcp>
	I1007 12:06:46.346439  401591 main.go:141] libmachine: (ha-628553) DBG |   </ip>
	I1007 12:06:46.346452  401591 main.go:141] libmachine: (ha-628553) DBG |   
	I1007 12:06:46.346459  401591 main.go:141] libmachine: (ha-628553) DBG | </network>
	I1007 12:06:46.346484  401591 main.go:141] libmachine: (ha-628553) DBG | 
	I1007 12:06:46.351921  401591 main.go:141] libmachine: (ha-628553) DBG | trying to create private KVM network mk-ha-628553 192.168.39.0/24...
	I1007 12:06:46.427414  401591 main.go:141] libmachine: (ha-628553) DBG | private KVM network mk-ha-628553 192.168.39.0/24 created
	I1007 12:06:46.427467  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.427375  401614 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.427482  401591 main.go:141] libmachine: (ha-628553) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 ...
	I1007 12:06:46.427511  401591 main.go:141] libmachine: (ha-628553) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:06:46.427534  401591 main.go:141] libmachine: (ha-628553) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:06:46.734984  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.734782  401614 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa...
	I1007 12:06:46.872452  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.872289  401614 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/ha-628553.rawdisk...
	I1007 12:06:46.872482  401591 main.go:141] libmachine: (ha-628553) DBG | Writing magic tar header
	I1007 12:06:46.872494  401591 main.go:141] libmachine: (ha-628553) DBG | Writing SSH key tar header
	I1007 12:06:46.872500  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.872414  401614 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 ...
	I1007 12:06:46.872528  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553
	I1007 12:06:46.872550  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 (perms=drwx------)
	I1007 12:06:46.872558  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:06:46.872571  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:06:46.872585  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.872599  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:06:46.872642  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:06:46.872667  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:06:46.872679  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:06:46.872704  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:06:46.872718  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home
	I1007 12:06:46.872731  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:06:46.872746  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:06:46.872756  401591 main.go:141] libmachine: (ha-628553) Creating domain...
	I1007 12:06:46.872770  401591 main.go:141] libmachine: (ha-628553) DBG | Skipping /home - not owner
	I1007 12:06:46.873981  401591 main.go:141] libmachine: (ha-628553) define libvirt domain using xml: 
	I1007 12:06:46.874013  401591 main.go:141] libmachine: (ha-628553) <domain type='kvm'>
	I1007 12:06:46.874020  401591 main.go:141] libmachine: (ha-628553)   <name>ha-628553</name>
	I1007 12:06:46.874024  401591 main.go:141] libmachine: (ha-628553)   <memory unit='MiB'>2200</memory>
	I1007 12:06:46.874029  401591 main.go:141] libmachine: (ha-628553)   <vcpu>2</vcpu>
	I1007 12:06:46.874033  401591 main.go:141] libmachine: (ha-628553)   <features>
	I1007 12:06:46.874038  401591 main.go:141] libmachine: (ha-628553)     <acpi/>
	I1007 12:06:46.874041  401591 main.go:141] libmachine: (ha-628553)     <apic/>
	I1007 12:06:46.874076  401591 main.go:141] libmachine: (ha-628553)     <pae/>
	I1007 12:06:46.874106  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874128  401591 main.go:141] libmachine: (ha-628553)   </features>
	I1007 12:06:46.874148  401591 main.go:141] libmachine: (ha-628553)   <cpu mode='host-passthrough'>
	I1007 12:06:46.874160  401591 main.go:141] libmachine: (ha-628553)   
	I1007 12:06:46.874169  401591 main.go:141] libmachine: (ha-628553)   </cpu>
	I1007 12:06:46.874177  401591 main.go:141] libmachine: (ha-628553)   <os>
	I1007 12:06:46.874184  401591 main.go:141] libmachine: (ha-628553)     <type>hvm</type>
	I1007 12:06:46.874189  401591 main.go:141] libmachine: (ha-628553)     <boot dev='cdrom'/>
	I1007 12:06:46.874195  401591 main.go:141] libmachine: (ha-628553)     <boot dev='hd'/>
	I1007 12:06:46.874201  401591 main.go:141] libmachine: (ha-628553)     <bootmenu enable='no'/>
	I1007 12:06:46.874209  401591 main.go:141] libmachine: (ha-628553)   </os>
	I1007 12:06:46.874217  401591 main.go:141] libmachine: (ha-628553)   <devices>
	I1007 12:06:46.874227  401591 main.go:141] libmachine: (ha-628553)     <disk type='file' device='cdrom'>
	I1007 12:06:46.874240  401591 main.go:141] libmachine: (ha-628553)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/boot2docker.iso'/>
	I1007 12:06:46.874254  401591 main.go:141] libmachine: (ha-628553)       <target dev='hdc' bus='scsi'/>
	I1007 12:06:46.874286  401591 main.go:141] libmachine: (ha-628553)       <readonly/>
	I1007 12:06:46.874302  401591 main.go:141] libmachine: (ha-628553)     </disk>
	I1007 12:06:46.874308  401591 main.go:141] libmachine: (ha-628553)     <disk type='file' device='disk'>
	I1007 12:06:46.874314  401591 main.go:141] libmachine: (ha-628553)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:06:46.874328  401591 main.go:141] libmachine: (ha-628553)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/ha-628553.rawdisk'/>
	I1007 12:06:46.874335  401591 main.go:141] libmachine: (ha-628553)       <target dev='hda' bus='virtio'/>
	I1007 12:06:46.874340  401591 main.go:141] libmachine: (ha-628553)     </disk>
	I1007 12:06:46.874346  401591 main.go:141] libmachine: (ha-628553)     <interface type='network'>
	I1007 12:06:46.874352  401591 main.go:141] libmachine: (ha-628553)       <source network='mk-ha-628553'/>
	I1007 12:06:46.874358  401591 main.go:141] libmachine: (ha-628553)       <model type='virtio'/>
	I1007 12:06:46.874363  401591 main.go:141] libmachine: (ha-628553)     </interface>
	I1007 12:06:46.874369  401591 main.go:141] libmachine: (ha-628553)     <interface type='network'>
	I1007 12:06:46.874375  401591 main.go:141] libmachine: (ha-628553)       <source network='default'/>
	I1007 12:06:46.874381  401591 main.go:141] libmachine: (ha-628553)       <model type='virtio'/>
	I1007 12:06:46.874386  401591 main.go:141] libmachine: (ha-628553)     </interface>
	I1007 12:06:46.874395  401591 main.go:141] libmachine: (ha-628553)     <serial type='pty'>
	I1007 12:06:46.874400  401591 main.go:141] libmachine: (ha-628553)       <target port='0'/>
	I1007 12:06:46.874409  401591 main.go:141] libmachine: (ha-628553)     </serial>
	I1007 12:06:46.874429  401591 main.go:141] libmachine: (ha-628553)     <console type='pty'>
	I1007 12:06:46.874446  401591 main.go:141] libmachine: (ha-628553)       <target type='serial' port='0'/>
	I1007 12:06:46.874474  401591 main.go:141] libmachine: (ha-628553)     </console>
	I1007 12:06:46.874484  401591 main.go:141] libmachine: (ha-628553)     <rng model='virtio'>
	I1007 12:06:46.874505  401591 main.go:141] libmachine: (ha-628553)       <backend model='random'>/dev/random</backend>
	I1007 12:06:46.874515  401591 main.go:141] libmachine: (ha-628553)     </rng>
	I1007 12:06:46.874526  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874539  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874559  401591 main.go:141] libmachine: (ha-628553)   </devices>
	I1007 12:06:46.874569  401591 main.go:141] libmachine: (ha-628553) </domain>
	I1007 12:06:46.874620  401591 main.go:141] libmachine: (ha-628553) 
	I1007 12:06:46.879724  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:6a:a7:e1 in network default
	I1007 12:06:46.880361  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:46.880382  401591 main.go:141] libmachine: (ha-628553) Ensuring networks are active...
	I1007 12:06:46.881257  401591 main.go:141] libmachine: (ha-628553) Ensuring network default is active
	I1007 12:06:46.881675  401591 main.go:141] libmachine: (ha-628553) Ensuring network mk-ha-628553 is active
	I1007 12:06:46.882336  401591 main.go:141] libmachine: (ha-628553) Getting domain xml...
	I1007 12:06:46.883247  401591 main.go:141] libmachine: (ha-628553) Creating domain...
	I1007 12:06:48.123283  401591 main.go:141] libmachine: (ha-628553) Waiting to get IP...
	I1007 12:06:48.124056  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.124511  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.124563  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.124510  401614 retry.go:31] will retry after 252.804778ms: waiting for machine to come up
	I1007 12:06:48.379035  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.379469  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.379489  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.379438  401614 retry.go:31] will retry after 356.807953ms: waiting for machine to come up
	I1007 12:06:48.738267  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.738722  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.738745  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.738688  401614 retry.go:31] will retry after 447.95167ms: waiting for machine to come up
	I1007 12:06:49.188519  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:49.188950  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:49.189019  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:49.188950  401614 retry.go:31] will retry after 486.200273ms: waiting for machine to come up
	I1007 12:06:49.676646  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:49.677063  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:49.677096  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:49.677017  401614 retry.go:31] will retry after 751.80427ms: waiting for machine to come up
	I1007 12:06:50.430789  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:50.431237  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:50.431260  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:50.431198  401614 retry.go:31] will retry after 897.786106ms: waiting for machine to come up
	I1007 12:06:51.330467  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:51.330831  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:51.330901  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:51.330836  401614 retry.go:31] will retry after 793.545437ms: waiting for machine to come up
	I1007 12:06:52.125725  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:52.126243  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:52.126280  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:52.126156  401614 retry.go:31] will retry after 986.036634ms: waiting for machine to come up
	I1007 12:06:53.113559  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:53.113953  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:53.113997  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:53.113901  401614 retry.go:31] will retry after 1.340335374s: waiting for machine to come up
	I1007 12:06:54.456245  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:54.456708  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:54.456732  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:54.456674  401614 retry.go:31] will retry after 1.447575739s: waiting for machine to come up
	I1007 12:06:55.906303  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:55.906806  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:55.906840  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:55.906747  401614 retry.go:31] will retry after 2.291446715s: waiting for machine to come up
	I1007 12:06:58.200323  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:58.200867  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:58.200896  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:58.200813  401614 retry.go:31] will retry after 2.450660794s: waiting for machine to come up
	I1007 12:07:00.654450  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:00.655019  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:07:00.655050  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:07:00.654943  401614 retry.go:31] will retry after 4.454613315s: waiting for machine to come up
	I1007 12:07:05.114240  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:05.114649  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:07:05.114678  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:07:05.114610  401614 retry.go:31] will retry after 4.13354174s: waiting for machine to come up
	I1007 12:07:09.251818  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.252270  401591 main.go:141] libmachine: (ha-628553) Found IP for machine: 192.168.39.110
	I1007 12:07:09.252297  401591 main.go:141] libmachine: (ha-628553) Reserving static IP address...
	I1007 12:07:09.252306  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has current primary IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.252723  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find host DHCP lease matching {name: "ha-628553", mac: "52:54:00:7b:12:fd", ip: "192.168.39.110"} in network mk-ha-628553
	I1007 12:07:09.328075  401591 main.go:141] libmachine: (ha-628553) DBG | Getting to WaitForSSH function...
	I1007 12:07:09.328108  401591 main.go:141] libmachine: (ha-628553) Reserved static IP address: 192.168.39.110
	I1007 12:07:09.328119  401591 main.go:141] libmachine: (ha-628553) Waiting for SSH to be available...
	I1007 12:07:09.330775  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.331429  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.331468  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.331645  401591 main.go:141] libmachine: (ha-628553) DBG | Using SSH client type: external
	I1007 12:07:09.331670  401591 main.go:141] libmachine: (ha-628553) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa (-rw-------)
	I1007 12:07:09.331710  401591 main.go:141] libmachine: (ha-628553) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:07:09.331724  401591 main.go:141] libmachine: (ha-628553) DBG | About to run SSH command:
	I1007 12:07:09.331736  401591 main.go:141] libmachine: (ha-628553) DBG | exit 0
	I1007 12:07:09.455242  401591 main.go:141] libmachine: (ha-628553) DBG | SSH cmd err, output: <nil>: 
	I1007 12:07:09.455632  401591 main.go:141] libmachine: (ha-628553) KVM machine creation complete!
	I1007 12:07:09.455937  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:07:09.456561  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:09.456802  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:09.457023  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:07:09.457043  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:09.458370  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:07:09.458386  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:07:09.458404  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:07:09.458413  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.460807  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.461171  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.461207  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.461300  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.461468  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.461645  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.461780  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.461919  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.462158  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.462173  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:07:09.562645  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:09.562687  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:07:09.562725  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.565649  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.565971  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.566008  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.566176  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.566388  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.566561  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.566676  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.566830  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.567082  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.567099  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:07:09.667847  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:07:09.667941  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:07:09.667948  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:07:09.667957  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.668229  401591 buildroot.go:166] provisioning hostname "ha-628553"
	I1007 12:07:09.668263  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.668471  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.671034  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.671389  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.671427  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.671579  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.671743  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.671923  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.672060  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.672217  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.672404  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.672417  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553 && echo "ha-628553" | sudo tee /etc/hostname
	I1007 12:07:09.786631  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553
	
	I1007 12:07:09.786665  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.789427  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.789744  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.789774  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.789989  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.790273  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.790426  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.790549  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.790707  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.790919  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.790942  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:07:09.900194  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:09.900232  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:07:09.900296  401591 buildroot.go:174] setting up certificates
	I1007 12:07:09.900321  401591 provision.go:84] configureAuth start
	I1007 12:07:09.900343  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.900684  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:09.903579  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.904022  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.904048  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.904222  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.906311  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.906630  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.906658  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.906830  401591 provision.go:143] copyHostCerts
	I1007 12:07:09.906874  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:09.906920  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:07:09.906937  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:09.907109  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:07:09.907203  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:09.907224  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:07:09.907232  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:09.907258  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:07:09.907319  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:09.907341  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:07:09.907348  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:09.907368  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:07:09.907427  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553 san=[127.0.0.1 192.168.39.110 ha-628553 localhost minikube]
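The "generating server cert" step signs a machine-specific server certificate with the minikube CA, embedding the SAN list printed above (127.0.0.1, 192.168.39.110, ha-628553, localhost, minikube). Below is a self-contained sketch of that kind of CA-signed certificate with Go's crypto/x509; it is illustrative only (errors are dropped for brevity, and the CA is generated on the fly) and is not minikube's provisioning code.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for the persisted ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the same kinds of SANs listed in the log.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-628553"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-628553", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }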
	I1007 12:07:09.982701  401591 provision.go:177] copyRemoteCerts
	I1007 12:07:09.982771  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:07:09.982796  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.985547  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.985859  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.985888  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.986044  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.986244  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.986399  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.986506  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.070065  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:07:10.070156  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:07:10.096714  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:07:10.096790  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:07:10.123505  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:07:10.123591  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:07:10.149487  401591 provision.go:87] duration metric: took 249.146606ms to configureAuth
	I1007 12:07:10.149524  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:07:10.149723  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:10.149836  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.152585  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.152880  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.152910  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.153069  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.153241  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.153400  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.153553  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.153691  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:10.153888  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:10.153903  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:07:10.373356  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:07:10.373398  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:07:10.373429  401591 main.go:141] libmachine: (ha-628553) Calling .GetURL
	I1007 12:07:10.374673  401591 main.go:141] libmachine: (ha-628553) DBG | Using libvirt version 6000000
	I1007 12:07:10.376989  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.377347  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.377371  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.377519  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:07:10.377531  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:07:10.377548  401591 client.go:171] duration metric: took 24.034266127s to LocalClient.Create
	I1007 12:07:10.377571  401591 start.go:167] duration metric: took 24.034341329s to libmachine.API.Create "ha-628553"
	I1007 12:07:10.377581  401591 start.go:293] postStartSetup for "ha-628553" (driver="kvm2")
	I1007 12:07:10.377593  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:07:10.377610  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.377871  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:07:10.377899  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.380000  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.380320  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.380343  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.380475  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.380648  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.380799  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.380960  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.461919  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:07:10.466913  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:07:10.466951  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:07:10.467055  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:07:10.467179  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:07:10.467195  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:07:10.467315  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:07:10.478269  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:10.503960  401591 start.go:296] duration metric: took 126.358927ms for postStartSetup
	I1007 12:07:10.504030  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:07:10.504699  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:10.507315  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.507612  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.507660  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.507956  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:10.508187  401591 start.go:128] duration metric: took 24.184210305s to createHost
	I1007 12:07:10.508226  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.510480  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.510789  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.510822  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.511033  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.511256  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.511415  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.511573  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.511733  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:10.511905  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:10.511924  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:07:10.611827  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302830.585700119
	
	I1007 12:07:10.611860  401591 fix.go:216] guest clock: 1728302830.585700119
	I1007 12:07:10.611870  401591 fix.go:229] Guest: 2024-10-07 12:07:10.585700119 +0000 UTC Remote: 2024-10-07 12:07:10.508202357 +0000 UTC m=+24.300236101 (delta=77.497762ms)
	I1007 12:07:10.611911  401591 fix.go:200] guest clock delta is within tolerance: 77.497762ms
	I1007 12:07:10.611917  401591 start.go:83] releasing machines lock for "ha-628553", held for 24.288033555s
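The clock check above runs date +%s.%N in the guest, parses the seconds.nanoseconds result, and compares the guest/host delta against a tolerance before deciding whether a resync is needed. A small Go sketch of that comparison follows, with the guest value taken from the log; the 2s tolerance is an assumption for illustration, not the threshold minikube actually uses.

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseGuestClock("1728302830.585700119") // value from the log above
    	delta := guest.Sub(time.Now())
    	tolerance := 2 * time.Second // assumed threshold, purely illustrative
    	if math.Abs(float64(delta)) <= float64(tolerance) {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }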
	I1007 12:07:10.611944  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.612216  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:10.614566  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.614868  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.614895  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.615083  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.615721  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.615950  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.616059  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:07:10.616101  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.616157  401591 ssh_runner.go:195] Run: cat /version.json
	I1007 12:07:10.616184  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.618780  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.618978  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619174  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.619193  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619348  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.619390  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619659  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.619672  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.619840  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.619847  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.620016  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.620024  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.620177  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.620181  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.718502  401591 ssh_runner.go:195] Run: systemctl --version
	I1007 12:07:10.724799  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:07:10.886272  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:07:10.893483  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:07:10.893578  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:07:10.909850  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:07:10.909880  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:07:10.909961  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:07:10.926247  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:07:10.941251  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:07:10.941339  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:07:10.955771  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:07:10.969831  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:07:11.084350  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:07:11.233191  401591 docker.go:233] disabling docker service ...
	I1007 12:07:11.233261  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:07:11.257607  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:07:11.272121  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:07:11.404315  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:07:11.544026  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:07:11.559395  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:07:11.580516  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:07:11.580580  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.592830  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:07:11.592905  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.604197  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.615375  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.626652  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:07:11.638161  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.649289  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.668010  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
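The sequence of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and ensure default_sysctls allows unprivileged low ports. The same edits expressed as regexp replacements over an in-memory sample config, as a sketch only (the starting file content is made up):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Tiny stand-in for /etc/crio/crio.conf.d/02-crio.conf.
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	// Pin the pause image.
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	// Switch the cgroup manager to cgroupfs.
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
    		ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")
    	// Make sure a default_sysctls list exists with the unprivileged-port entry.
    	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
    		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
    	}
    	fmt.Print(conf)
    }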
	I1007 12:07:11.679654  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:07:11.690371  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:07:11.690448  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:07:11.704718  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:07:11.715762  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:11.825411  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:07:11.918378  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:07:11.918470  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:07:11.923527  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:07:11.923612  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:07:11.927764  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:07:11.977811  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:07:11.977922  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:07:12.007918  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:07:12.039043  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:07:12.040655  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:12.043258  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:12.043618  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:12.043660  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:12.043867  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:07:12.048464  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:07:12.062293  401591 kubeadm.go:883] updating cluster {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:07:12.062486  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:07:12.062597  401591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:07:12.097470  401591 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:07:12.097555  401591 ssh_runner.go:195] Run: which lz4
	I1007 12:07:12.101992  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 12:07:12.102107  401591 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:07:12.106769  401591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:07:12.106815  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:07:13.549777  401591 crio.go:462] duration metric: took 1.447693523s to copy over tarball
	I1007 12:07:13.549867  401591 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:07:15.620966  401591 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.071058726s)
	I1007 12:07:15.621003  401591 crio.go:469] duration metric: took 2.071194203s to extract the tarball
	I1007 12:07:15.621015  401591 ssh_runner.go:146] rm: /preloaded.tar.lz4
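The preload step above copies the lz4 tarball over SSH and unpacks it into /var, logging the extraction time. The same extraction as a plain command invocation with equivalent timing, as a sketch; it assumes the tarball has already been copied to /preloaded.tar.lz4 as shown in the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	// Same flags as the logged command: preserve xattrs and use lz4 decompression.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s\n", err, out)
    		return
    	}
    	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
    }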
	I1007 12:07:15.659036  401591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:07:15.704438  401591 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:07:15.704468  401591 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:07:15.704477  401591 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.31.1 crio true true} ...
	I1007 12:07:15.704607  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
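The kubelet drop-in above is just the per-node values (binary path, hostname override, node IP, kubeconfig paths) joined into one ExecStart line. A small sketch that assembles the same line from its inputs; the flag table is hypothetical and exists only to make the wiring explicit.

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	binDir := "/var/lib/minikube/binaries/v1.31.1"
    	// Node-specific values taken from the log above.
    	flags := []struct{ name, value string }{
    		{"bootstrap-kubeconfig", "/etc/kubernetes/bootstrap-kubelet.conf"},
    		{"config", "/var/lib/kubelet/config.yaml"},
    		{"hostname-override", "ha-628553"},
    		{"kubeconfig", "/etc/kubernetes/kubelet.conf"},
    		{"node-ip", "192.168.39.110"},
    	}
    	args := make([]string, 0, len(flags))
    	for _, f := range flags {
    		args = append(args, fmt.Sprintf("--%s=%s", f.name, f.value))
    	}
    	fmt.Printf("ExecStart=%s/kubelet %s\n", binDir, strings.Join(args, " "))
    }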
	I1007 12:07:15.704694  401591 ssh_runner.go:195] Run: crio config
	I1007 12:07:15.754734  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:07:15.754757  401591 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:07:15.754770  401591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:07:15.754796  401591 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-628553 NodeName:ha-628553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:07:15.754985  401591 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-628553"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
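One property worth noting in the generated kubeadm config above: podSubnet (10.244.0.0/16) and serviceSubnet (10.96.0.0/12) must not overlap. The log does not perform such a check explicitly; the sketch below only illustrates how the two values shown above could be sanity-checked.

    package main

    import (
    	"fmt"
    	"net"
    )

    // overlaps reports whether two CIDR blocks intersect (for CIDRs, that means
    // one block contains the other's base address).
    func overlaps(a, b *net.IPNet) bool {
    	return a.Contains(b.IP) || b.Contains(a.IP)
    }

    func main() {
    	// Values copied from the kubeadm config above.
    	_, podCIDR, _ := net.ParseCIDR("10.244.0.0/16")
    	_, svcCIDR, _ := net.ParseCIDR("10.96.0.0/12")
    	if overlaps(podCIDR, svcCIDR) {
    		fmt.Println("pod and service CIDRs overlap")
    	} else {
    		fmt.Println("pod and service CIDRs are disjoint")
    	}
    }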
	I1007 12:07:15.755023  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:07:15.755081  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:07:15.772386  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:07:15.772511  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
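The kube-vip static-pod manifest above is generated from a template parameterized by the HA virtual IP (192.168.39.254, the APIServerHAVIP from the profile), the interface, and the image tag. A cut-down sketch of that kind of templating follows; the template text is invented for illustration and omits most of the real fields and leader-election settings.

    package main

    import (
    	"os"
    	"text/template"
    )

    // Cut-down stand-in for the manifest above; only the per-cluster fields vary.
    const kubeVipTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{ .Image }}
        args: ["manager"]
        env:
        - name: vip_interface
          value: {{ .Interface }}
        - name: address
          value: {{ .VIP }}
        - name: port
          value: "8443"
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
    	_ = t.Execute(os.Stdout, struct{ Image, Interface, VIP string }{
    		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.3",
    		Interface: "eth0",
    		VIP:       "192.168.39.254",
    	})
    }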
	I1007 12:07:15.772569  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:07:15.783117  401591 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:07:15.783206  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:07:15.793430  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:07:15.811520  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:07:15.829402  401591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:07:15.846802  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 12:07:15.864215  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:07:15.868441  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:07:15.881667  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:16.004989  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:07:16.023767  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.110
	I1007 12:07:16.023798  401591 certs.go:194] generating shared ca certs ...
	I1007 12:07:16.023817  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.023995  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:07:16.024043  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:07:16.024055  401591 certs.go:256] generating profile certs ...
	I1007 12:07:16.024128  401591 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:07:16.024144  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt with IP's: []
	I1007 12:07:16.480073  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt ...
	I1007 12:07:16.480107  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt: {Name:mkfb027cfd899ceeb19712c80d47ef46bbe4c190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.480291  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key ...
	I1007 12:07:16.480303  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key: {Name:mk472c4daf268a3e203f7108e0ee108260fa3747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.480379  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105
	I1007 12:07:16.480394  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I1007 12:07:16.560831  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 ...
	I1007 12:07:16.560865  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105: {Name:mkda56599207690099e4c299c085dc0644ef658a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.561026  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105 ...
	I1007 12:07:16.561038  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105: {Name:mk95b3f2a966eb67f31cfddf5b506b130fe9bd62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.561111  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:07:16.561219  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:07:16.561278  401591 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:07:16.561293  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt with IP's: []
	I1007 12:07:16.724627  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt ...
	I1007 12:07:16.724663  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt: {Name:mka4b333091a10b550ae6d13ed243d08adf6256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.724831  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key ...
	I1007 12:07:16.724852  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key: {Name:mk6b2bcdf33ba7c4b6b9286fdc19a9d76a966caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.724932  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:07:16.724949  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:07:16.724963  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:07:16.724977  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:07:16.724990  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:07:16.725004  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:07:16.725016  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:07:16.725028  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:07:16.725075  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:07:16.725108  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:07:16.725118  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:07:16.725153  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:07:16.725179  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:07:16.725216  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:07:16.725253  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:16.725329  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:07:16.725350  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:07:16.725362  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:16.726018  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:07:16.753427  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:07:16.781404  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:07:16.817294  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:07:16.847559  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:07:16.873440  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:07:16.900479  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:07:16.927096  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:07:16.955843  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:07:16.983339  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:07:17.013360  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:07:17.041294  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:07:17.061373  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:07:17.067955  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:07:17.081953  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.087146  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.087222  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.094009  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:07:17.108332  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:07:17.122877  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.128622  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.128708  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.136010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:07:17.150544  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:07:17.165028  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.170897  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.170982  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.177949  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
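The openssl/ln pairs above implement OpenSSL's hashed-directory convention: each trusted certificate in /etc/ssl/certs is reachable through a <subject-hash>.0 symlink so the library can look it up by subject hash. A sketch of the same two steps (ask openssl for the hash, then create the symlink); the helper name linkBySubjectHash is made up, and the paths are the ones from the log.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash runs `openssl x509 -hash -noout -in cert` and points
    // <certsDir>/<hash>.0 at the certificate, mirroring the ln -fs step above.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }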
	I1007 12:07:17.192554  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:07:17.197582  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:07:17.197639  401591 kubeadm.go:392] StartCluster: {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:07:17.197720  401591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:07:17.197783  401591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:07:17.244966  401591 cri.go:89] found id: ""
	I1007 12:07:17.245041  401591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:07:17.257993  401591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:07:17.270516  401591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:07:17.282873  401591 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:07:17.282897  401591 kubeadm.go:157] found existing configuration files:
	
	I1007 12:07:17.282953  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:07:17.293921  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:07:17.294014  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:07:17.305489  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:07:17.315800  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:07:17.315863  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:07:17.326391  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:07:17.336609  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:07:17.336691  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:07:17.347761  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:07:17.358288  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:07:17.358369  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:07:17.369688  401591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 12:07:17.494169  401591 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:07:17.494284  401591 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:07:17.626708  401591 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:07:17.626813  401591 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:07:17.626906  401591 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:07:17.639261  401591 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:07:17.853154  401591 out.go:235]   - Generating certificates and keys ...
	I1007 12:07:17.853313  401591 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:07:17.853396  401591 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:07:17.853510  401591 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:07:17.853594  401591 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:07:18.070639  401591 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:07:18.133955  401591 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:07:18.493727  401591 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:07:18.493854  401591 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-628553 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1007 12:07:18.624521  401591 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:07:18.624725  401591 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-628553 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1007 12:07:18.772457  401591 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:07:19.133450  401591 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:07:19.279063  401591 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:07:19.279188  401591 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:07:19.348410  401591 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:07:19.574804  401591 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:07:19.645430  401591 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:07:19.894630  401591 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:07:20.065666  401591 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:07:20.066298  401591 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:07:20.071555  401591 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:07:20.073562  401591 out.go:235]   - Booting up control plane ...
	I1007 12:07:20.073670  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:07:20.073742  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:07:20.073803  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:07:20.089334  401591 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:07:20.096504  401591 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:07:20.096582  401591 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:07:20.238757  401591 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:07:20.238922  401591 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:07:21.247383  401591 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.007919898s
	I1007 12:07:21.247485  401591 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:07:26.913696  401591 kubeadm.go:310] [api-check] The API server is healthy after 5.671139192s
	I1007 12:07:26.932589  401591 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:07:26.948791  401591 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:07:27.494371  401591 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:07:27.494637  401591 kubeadm.go:310] [mark-control-plane] Marking the node ha-628553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:07:27.512639  401591 kubeadm.go:310] [bootstrap-token] Using token: jd5sg7.ynaw0s6f9h2yr29w
	I1007 12:07:27.514508  401591 out.go:235]   - Configuring RBAC rules ...
	I1007 12:07:27.514678  401591 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:07:27.527273  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:07:27.537651  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:07:27.542026  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:07:27.545879  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:07:27.550174  401591 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:07:27.568355  401591 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:07:27.807712  401591 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:07:28.321610  401591 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:07:28.321657  401591 kubeadm.go:310] 
	I1007 12:07:28.321720  401591 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:07:28.321728  401591 kubeadm.go:310] 
	I1007 12:07:28.321852  401591 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:07:28.321870  401591 kubeadm.go:310] 
	I1007 12:07:28.321904  401591 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:07:28.321987  401591 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:07:28.322064  401591 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:07:28.322074  401591 kubeadm.go:310] 
	I1007 12:07:28.322155  401591 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:07:28.322171  401591 kubeadm.go:310] 
	I1007 12:07:28.322225  401591 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:07:28.322234  401591 kubeadm.go:310] 
	I1007 12:07:28.322293  401591 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:07:28.322386  401591 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:07:28.322471  401591 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:07:28.322481  401591 kubeadm.go:310] 
	I1007 12:07:28.322608  401591 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:07:28.322677  401591 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:07:28.322684  401591 kubeadm.go:310] 
	I1007 12:07:28.322753  401591 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jd5sg7.ynaw0s6f9h2yr29w \
	I1007 12:07:28.322898  401591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 \
	I1007 12:07:28.322931  401591 kubeadm.go:310] 	--control-plane 
	I1007 12:07:28.322941  401591 kubeadm.go:310] 
	I1007 12:07:28.323057  401591 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:07:28.323067  401591 kubeadm.go:310] 
	I1007 12:07:28.323165  401591 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jd5sg7.ynaw0s6f9h2yr29w \
	I1007 12:07:28.323318  401591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 
	I1007 12:07:28.324193  401591 kubeadm.go:310] W1007 12:07:17.473376     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:07:28.324456  401591 kubeadm.go:310] W1007 12:07:17.474417     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:07:28.324568  401591 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 12:07:28.324604  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:07:28.324616  401591 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:07:28.326463  401591 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 12:07:28.327680  401591 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 12:07:28.333563  401591 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 12:07:28.333587  401591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 12:07:28.357058  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 12:07:28.763710  401591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:07:28.763800  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:28.763837  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553 minikube.k8s.io/updated_at=2024_10_07T12_07_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=true
	I1007 12:07:28.789823  401591 ops.go:34] apiserver oom_adj: -16
	I1007 12:07:28.939139  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:29.440288  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:29.939479  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:30.440099  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:30.940243  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:31.439830  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:31.939544  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:32.439274  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:32.691661  401591 kubeadm.go:1113] duration metric: took 3.927936335s to wait for elevateKubeSystemPrivileges
	I1007 12:07:32.691702  401591 kubeadm.go:394] duration metric: took 15.494065691s to StartCluster
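Editor's note: the burst of "kubectl get sa default" runs above (roughly every 500ms between 12:07:28 and 12:07:32) is a readiness poll: the tooling waits for the "default" service account to exist before it can bind cluster-admin to kube-system. A minimal, hypothetical Go sketch of that polling pattern follows (not minikube's actual implementation; runOverSSH is an assumed callback that executes a command on the node):

	package sketch

	import (
		"context"
		"time"
	)

	// pollDefaultSA keeps running `kubectl get sa default` on the node until the
	// command succeeds (the service account exists) or the context expires.
	func pollDefaultSA(ctx context.Context, runOverSSH func(cmd string) error) error {
		const cmd = "sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig"
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if err := runOverSSH(cmd); err == nil {
				return nil // default service account is present; RBAC setup can proceed
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}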
	I1007 12:07:32.691720  401591 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:32.691805  401591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:07:32.694409  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:32.695052  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:07:32.695056  401591 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:07:32.695093  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:07:32.695116  401591 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:07:32.695224  401591 addons.go:69] Setting storage-provisioner=true in profile "ha-628553"
	I1007 12:07:32.695233  401591 addons.go:69] Setting default-storageclass=true in profile "ha-628553"
	I1007 12:07:32.695246  401591 addons.go:234] Setting addon storage-provisioner=true in "ha-628553"
	I1007 12:07:32.695276  401591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-628553"
	I1007 12:07:32.695321  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:32.695278  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:07:32.695828  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.695856  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.695880  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.695904  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.713283  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I1007 12:07:32.713330  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I1007 12:07:32.713795  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.713821  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.714372  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.714404  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.714470  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.714495  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.714860  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.714918  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.715087  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.715613  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.715671  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.717649  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:07:32.717950  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:07:32.718459  401591 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:07:32.718801  401591 addons.go:234] Setting addon default-storageclass=true in "ha-628553"
	I1007 12:07:32.718846  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:07:32.719253  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.719305  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.733464  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45313
	I1007 12:07:32.734011  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.734570  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.734597  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.734946  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.735147  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.736496  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I1007 12:07:32.736815  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:32.737247  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.737699  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.737724  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.738090  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.738558  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.738606  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.739129  401591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:07:32.740633  401591 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:07:32.740659  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:07:32.740683  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:32.744392  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.744885  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:32.744914  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.745085  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:32.745311  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:32.745493  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:32.745635  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:32.755450  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I1007 12:07:32.756180  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.756775  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.756839  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.757215  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.757439  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.759112  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:32.759361  401591 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:07:32.759380  401591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:07:32.759399  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:32.761925  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.762241  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:32.762266  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.762381  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:32.762573  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:32.762681  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:32.762803  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:32.893511  401591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:07:32.927665  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 12:07:32.930086  401591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:07:33.749725  401591 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1007 12:07:33.749834  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.749857  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750070  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750085  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750150  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.750183  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750217  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750228  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750239  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750364  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750400  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750412  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750420  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750560  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.750625  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750637  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750639  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750662  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750758  401591 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:07:33.750779  401591 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:07:33.750910  401591 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 12:07:33.750920  401591 round_trippers.go:469] Request Headers:
	I1007 12:07:33.750933  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:07:33.750938  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:07:33.762601  401591 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:07:33.763351  401591 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 12:07:33.763370  401591 round_trippers.go:469] Request Headers:
	I1007 12:07:33.763378  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:07:33.763383  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:07:33.763386  401591 round_trippers.go:473]     Content-Type: application/json
	I1007 12:07:33.766118  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:07:33.766300  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.766313  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.766629  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.766646  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.766684  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.768511  401591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 12:07:33.770162  401591 addons.go:510] duration metric: took 1.075047661s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 12:07:33.770212  401591 start.go:246] waiting for cluster config update ...
	I1007 12:07:33.770227  401591 start.go:255] writing updated cluster config ...
	I1007 12:07:33.772026  401591 out.go:201] 
	I1007 12:07:33.773570  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:33.773647  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:33.775167  401591 out.go:177] * Starting "ha-628553-m02" control-plane node in "ha-628553" cluster
	I1007 12:07:33.776386  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:07:33.776419  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:07:33.776564  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:07:33.776577  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:07:33.776670  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:33.776889  401591 start.go:360] acquireMachinesLock for ha-628553-m02: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:07:33.776949  401591 start.go:364] duration metric: took 33.552µs to acquireMachinesLock for "ha-628553-m02"
	I1007 12:07:33.776978  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:07:33.777088  401591 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 12:07:33.779624  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:07:33.779742  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:33.779791  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:33.795004  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I1007 12:07:33.795415  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:33.795909  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:33.795931  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:33.796264  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:33.796498  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:33.796628  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:33.796770  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:07:33.796805  401591 client.go:168] LocalClient.Create starting
	I1007 12:07:33.796847  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:07:33.796894  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:07:33.796911  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:07:33.796968  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:07:33.796987  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:07:33.796997  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:07:33.797015  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:07:33.797023  401591 main.go:141] libmachine: (ha-628553-m02) Calling .PreCreateCheck
	I1007 12:07:33.797222  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:33.797700  401591 main.go:141] libmachine: Creating machine...
	I1007 12:07:33.797714  401591 main.go:141] libmachine: (ha-628553-m02) Calling .Create
	I1007 12:07:33.797891  401591 main.go:141] libmachine: (ha-628553-m02) Creating KVM machine...
	I1007 12:07:33.799094  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found existing default KVM network
	I1007 12:07:33.799243  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found existing private KVM network mk-ha-628553
	I1007 12:07:33.799364  401591 main.go:141] libmachine: (ha-628553-m02) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 ...
	I1007 12:07:33.799377  401591 main.go:141] libmachine: (ha-628553-m02) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:07:33.799477  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:33.799367  401944 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:07:33.799603  401591 main.go:141] libmachine: (ha-628553-m02) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:07:34.069404  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.069235  401944 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa...
	I1007 12:07:34.176325  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.176157  401944 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/ha-628553-m02.rawdisk...
	I1007 12:07:34.176359  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Writing magic tar header
	I1007 12:07:34.176372  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Writing SSH key tar header
	I1007 12:07:34.176384  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.176303  401944 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 ...
	I1007 12:07:34.176398  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02
	I1007 12:07:34.176501  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:07:34.176544  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:07:34.176555  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 (perms=drwx------)
	I1007 12:07:34.176567  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:07:34.176576  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:07:34.176583  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:07:34.176594  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:07:34.176609  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:07:34.176622  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:07:34.176635  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:07:34.176651  401591 main.go:141] libmachine: (ha-628553-m02) Creating domain...
	I1007 12:07:34.176660  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:07:34.176668  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home
	I1007 12:07:34.176675  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Skipping /home - not owner
	I1007 12:07:34.177701  401591 main.go:141] libmachine: (ha-628553-m02) define libvirt domain using xml: 
	I1007 12:07:34.177730  401591 main.go:141] libmachine: (ha-628553-m02) <domain type='kvm'>
	I1007 12:07:34.177740  401591 main.go:141] libmachine: (ha-628553-m02)   <name>ha-628553-m02</name>
	I1007 12:07:34.177751  401591 main.go:141] libmachine: (ha-628553-m02)   <memory unit='MiB'>2200</memory>
	I1007 12:07:34.177759  401591 main.go:141] libmachine: (ha-628553-m02)   <vcpu>2</vcpu>
	I1007 12:07:34.177766  401591 main.go:141] libmachine: (ha-628553-m02)   <features>
	I1007 12:07:34.177777  401591 main.go:141] libmachine: (ha-628553-m02)     <acpi/>
	I1007 12:07:34.177786  401591 main.go:141] libmachine: (ha-628553-m02)     <apic/>
	I1007 12:07:34.177796  401591 main.go:141] libmachine: (ha-628553-m02)     <pae/>
	I1007 12:07:34.177809  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.177820  401591 main.go:141] libmachine: (ha-628553-m02)   </features>
	I1007 12:07:34.177834  401591 main.go:141] libmachine: (ha-628553-m02)   <cpu mode='host-passthrough'>
	I1007 12:07:34.177844  401591 main.go:141] libmachine: (ha-628553-m02)   
	I1007 12:07:34.177853  401591 main.go:141] libmachine: (ha-628553-m02)   </cpu>
	I1007 12:07:34.177864  401591 main.go:141] libmachine: (ha-628553-m02)   <os>
	I1007 12:07:34.177870  401591 main.go:141] libmachine: (ha-628553-m02)     <type>hvm</type>
	I1007 12:07:34.177876  401591 main.go:141] libmachine: (ha-628553-m02)     <boot dev='cdrom'/>
	I1007 12:07:34.177883  401591 main.go:141] libmachine: (ha-628553-m02)     <boot dev='hd'/>
	I1007 12:07:34.177888  401591 main.go:141] libmachine: (ha-628553-m02)     <bootmenu enable='no'/>
	I1007 12:07:34.177895  401591 main.go:141] libmachine: (ha-628553-m02)   </os>
	I1007 12:07:34.177900  401591 main.go:141] libmachine: (ha-628553-m02)   <devices>
	I1007 12:07:34.177910  401591 main.go:141] libmachine: (ha-628553-m02)     <disk type='file' device='cdrom'>
	I1007 12:07:34.177952  401591 main.go:141] libmachine: (ha-628553-m02)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/boot2docker.iso'/>
	I1007 12:07:34.177981  401591 main.go:141] libmachine: (ha-628553-m02)       <target dev='hdc' bus='scsi'/>
	I1007 12:07:34.177992  401591 main.go:141] libmachine: (ha-628553-m02)       <readonly/>
	I1007 12:07:34.178002  401591 main.go:141] libmachine: (ha-628553-m02)     </disk>
	I1007 12:07:34.178015  401591 main.go:141] libmachine: (ha-628553-m02)     <disk type='file' device='disk'>
	I1007 12:07:34.178028  401591 main.go:141] libmachine: (ha-628553-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:07:34.178044  401591 main.go:141] libmachine: (ha-628553-m02)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/ha-628553-m02.rawdisk'/>
	I1007 12:07:34.178055  401591 main.go:141] libmachine: (ha-628553-m02)       <target dev='hda' bus='virtio'/>
	I1007 12:07:34.178066  401591 main.go:141] libmachine: (ha-628553-m02)     </disk>
	I1007 12:07:34.178073  401591 main.go:141] libmachine: (ha-628553-m02)     <interface type='network'>
	I1007 12:07:34.178085  401591 main.go:141] libmachine: (ha-628553-m02)       <source network='mk-ha-628553'/>
	I1007 12:07:34.178102  401591 main.go:141] libmachine: (ha-628553-m02)       <model type='virtio'/>
	I1007 12:07:34.178114  401591 main.go:141] libmachine: (ha-628553-m02)     </interface>
	I1007 12:07:34.178125  401591 main.go:141] libmachine: (ha-628553-m02)     <interface type='network'>
	I1007 12:07:34.178138  401591 main.go:141] libmachine: (ha-628553-m02)       <source network='default'/>
	I1007 12:07:34.178148  401591 main.go:141] libmachine: (ha-628553-m02)       <model type='virtio'/>
	I1007 12:07:34.178157  401591 main.go:141] libmachine: (ha-628553-m02)     </interface>
	I1007 12:07:34.178172  401591 main.go:141] libmachine: (ha-628553-m02)     <serial type='pty'>
	I1007 12:07:34.178184  401591 main.go:141] libmachine: (ha-628553-m02)       <target port='0'/>
	I1007 12:07:34.178191  401591 main.go:141] libmachine: (ha-628553-m02)     </serial>
	I1007 12:07:34.178201  401591 main.go:141] libmachine: (ha-628553-m02)     <console type='pty'>
	I1007 12:07:34.178212  401591 main.go:141] libmachine: (ha-628553-m02)       <target type='serial' port='0'/>
	I1007 12:07:34.178223  401591 main.go:141] libmachine: (ha-628553-m02)     </console>
	I1007 12:07:34.178233  401591 main.go:141] libmachine: (ha-628553-m02)     <rng model='virtio'>
	I1007 12:07:34.178266  401591 main.go:141] libmachine: (ha-628553-m02)       <backend model='random'>/dev/random</backend>
	I1007 12:07:34.178292  401591 main.go:141] libmachine: (ha-628553-m02)     </rng>
	I1007 12:07:34.178303  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.178316  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.178324  401591 main.go:141] libmachine: (ha-628553-m02)   </devices>
	I1007 12:07:34.178331  401591 main.go:141] libmachine: (ha-628553-m02) </domain>
	I1007 12:07:34.178342  401591 main.go:141] libmachine: (ha-628553-m02) 
	I1007 12:07:34.185967  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:33:2a:81 in network default
	I1007 12:07:34.186520  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring networks are active...
	I1007 12:07:34.186550  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:34.187255  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring network default is active
	I1007 12:07:34.187562  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring network mk-ha-628553 is active
	I1007 12:07:34.187923  401591 main.go:141] libmachine: (ha-628553-m02) Getting domain xml...
	I1007 12:07:34.188741  401591 main.go:141] libmachine: (ha-628553-m02) Creating domain...
	I1007 12:07:35.460306  401591 main.go:141] libmachine: (ha-628553-m02) Waiting to get IP...
	I1007 12:07:35.461270  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.461715  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.461750  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.461693  401944 retry.go:31] will retry after 211.598538ms: waiting for machine to come up
	I1007 12:07:35.675347  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.675895  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.675927  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.675805  401944 retry.go:31] will retry after 296.849ms: waiting for machine to come up
	I1007 12:07:35.974395  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.974893  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.974954  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.974854  401944 retry.go:31] will retry after 388.404149ms: waiting for machine to come up
	I1007 12:07:36.365448  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:36.366155  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:36.366184  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:36.366075  401944 retry.go:31] will retry after 534.318698ms: waiting for machine to come up
	I1007 12:07:36.901907  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:36.902475  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:36.902512  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:36.902413  401944 retry.go:31] will retry after 649.263788ms: waiting for machine to come up
	I1007 12:07:37.553345  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:37.553872  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:37.553898  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:37.553792  401944 retry.go:31] will retry after 939.159086ms: waiting for machine to come up
	I1007 12:07:38.495133  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:38.495757  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:38.495785  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:38.495703  401944 retry.go:31] will retry after 913.128072ms: waiting for machine to come up
	I1007 12:07:39.410208  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:39.410778  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:39.410847  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:39.410734  401944 retry.go:31] will retry after 1.275296837s: waiting for machine to come up
	I1007 12:07:40.688215  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:40.688737  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:40.688763  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:40.688692  401944 retry.go:31] will retry after 1.706568868s: waiting for machine to come up
	I1007 12:07:42.397331  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:42.398210  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:42.398242  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:42.398140  401944 retry.go:31] will retry after 2.035219193s: waiting for machine to come up
	I1007 12:07:44.435063  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:44.435558  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:44.435604  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:44.435541  401944 retry.go:31] will retry after 2.129313504s: waiting for machine to come up
	I1007 12:07:46.567866  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:46.568337  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:46.568363  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:46.568294  401944 retry.go:31] will retry after 2.900138556s: waiting for machine to come up
	I1007 12:07:49.470446  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:49.470835  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:49.470861  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:49.470787  401944 retry.go:31] will retry after 2.802723119s: waiting for machine to come up
	I1007 12:07:52.276755  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:52.277120  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:52.277151  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:52.277100  401944 retry.go:31] will retry after 4.815030442s: waiting for machine to come up
	I1007 12:07:57.095944  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.096384  401591 main.go:141] libmachine: (ha-628553-m02) Found IP for machine: 192.168.39.169
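Editor's note: the repeated "will retry after ..." lines above (211ms, 296ms, 388ms, ... up to ~4.8s) show the kvm2 driver polling the libvirt DHCP leases for the new domain's MAC address, with the delay growing between attempts until an IP appears. A rough, hypothetical sketch of that backoff loop follows (names and growth factor are illustrative, not minikube's actual retry package):

	package sketch

	import (
		"fmt"
		"time"
	)

	// waitForIP polls lookup() for the domain's IP address, sleeping a delay that
	// grows by roughly half per attempt, similar to the intervals seen in the log.
	func waitForIP(lookup func() (ip string, ok bool), maxAttempts int) (string, error) {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if ip, ok := lookup(); ok {
				return ip, nil
			}
			time.Sleep(delay)
			delay += delay / 2
		}
		return "", fmt.Errorf("no IP address after %d attempts", maxAttempts)
	}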
	I1007 12:07:57.096411  401591 main.go:141] libmachine: (ha-628553-m02) Reserving static IP address...
	I1007 12:07:57.096424  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has current primary IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.096805  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find host DHCP lease matching {name: "ha-628553-m02", mac: "52:54:00:59:4a:2e", ip: "192.168.39.169"} in network mk-ha-628553
	I1007 12:07:57.173671  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Getting to WaitForSSH function...
	I1007 12:07:57.173707  401591 main.go:141] libmachine: (ha-628553-m02) Reserved static IP address: 192.168.39.169
	I1007 12:07:57.173721  401591 main.go:141] libmachine: (ha-628553-m02) Waiting for SSH to be available...
	I1007 12:07:57.176077  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.176414  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.176448  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.176591  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH client type: external
	I1007 12:07:57.176618  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa (-rw-------)
	I1007 12:07:57.176654  401591 main.go:141] libmachine: (ha-628553-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:07:57.176671  401591 main.go:141] libmachine: (ha-628553-m02) DBG | About to run SSH command:
	I1007 12:07:57.176683  401591 main.go:141] libmachine: (ha-628553-m02) DBG | exit 0
	I1007 12:07:57.299343  401591 main.go:141] libmachine: (ha-628553-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 12:07:57.299606  401591 main.go:141] libmachine: (ha-628553-m02) KVM machine creation complete!
	I1007 12:07:57.299951  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:57.300520  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:57.300733  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:57.300899  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:07:57.300909  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetState
	I1007 12:07:57.302247  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:07:57.302263  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:07:57.302270  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:07:57.302277  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.304689  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.305046  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.305083  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.305220  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.305416  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.305566  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.305687  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.305859  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.306075  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.306087  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:07:57.402628  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:57.402652  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:07:57.402660  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.405841  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.406213  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.406245  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.406443  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.406658  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.406871  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.407020  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.407143  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.407310  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.407320  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:07:57.503882  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:07:57.503964  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:07:57.503972  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:07:57.503980  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.504231  401591 buildroot.go:166] provisioning hostname "ha-628553-m02"
	I1007 12:07:57.504259  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.504487  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.507249  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.507577  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.507606  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.507742  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.507923  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.508054  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.508176  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.508480  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.508681  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.508694  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m02 && echo "ha-628553-m02" | sudo tee /etc/hostname
	I1007 12:07:57.622198  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m02
	
	I1007 12:07:57.622239  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.625084  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.625439  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.625478  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.625644  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.625837  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.626007  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.626130  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.626308  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.626503  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.626525  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:07:57.732566  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:57.732598  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:07:57.732622  401591 buildroot.go:174] setting up certificates
	I1007 12:07:57.732636  401591 provision.go:84] configureAuth start
	I1007 12:07:57.732649  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.732948  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:57.735493  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.735786  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.735817  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.735963  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.737975  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.738293  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.738318  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.738455  401591 provision.go:143] copyHostCerts
	I1007 12:07:57.738486  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:57.738525  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:07:57.738541  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:57.738610  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:07:57.738684  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:57.738703  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:07:57.738710  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:57.738733  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:07:57.738777  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:57.738793  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:07:57.738800  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:57.738820  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:07:57.738866  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m02 san=[127.0.0.1 192.168.39.169 ha-628553-m02 localhost minikube]
	I1007 12:07:58.143814  401591 provision.go:177] copyRemoteCerts
	I1007 12:07:58.143882  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:07:58.143910  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.147250  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.147700  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.147742  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.147869  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.148081  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.148224  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.148327  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.230179  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:07:58.230271  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:07:58.258288  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:07:58.258382  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:07:58.285135  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:07:58.285208  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:07:58.312621  401591 provision.go:87] duration metric: took 579.970325ms to configureAuth
	I1007 12:07:58.312652  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:07:58.312828  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:58.312907  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.315586  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.315959  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.315990  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.316222  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.316422  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.316601  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.316743  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.316927  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:58.317142  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:58.317161  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:07:58.545249  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:07:58.545278  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:07:58.545290  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetURL
	I1007 12:07:58.546702  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using libvirt version 6000000
	I1007 12:07:58.548842  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.549284  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.549317  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.549407  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:07:58.549418  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:07:58.549424  401591 client.go:171] duration metric: took 24.752608877s to LocalClient.Create
	I1007 12:07:58.549459  401591 start.go:167] duration metric: took 24.752691243s to libmachine.API.Create "ha-628553"
	I1007 12:07:58.549474  401591 start.go:293] postStartSetup for "ha-628553-m02" (driver="kvm2")
	I1007 12:07:58.549489  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:07:58.549507  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.549760  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:07:58.549786  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.551787  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.552071  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.552105  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.552239  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.552437  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.552667  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.552832  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.629949  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:07:58.634600  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:07:58.634633  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:07:58.634716  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:07:58.634820  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:07:58.634833  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:07:58.634948  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:07:58.644927  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:58.670613  401591 start.go:296] duration metric: took 121.120015ms for postStartSetup
	I1007 12:07:58.670687  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:58.671316  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:58.673738  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.674117  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.674143  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.674429  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:58.674687  401591 start.go:128] duration metric: took 24.897586771s to createHost
	I1007 12:07:58.674717  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.676881  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.677232  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.677261  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.677369  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.677545  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.677717  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.677844  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.677997  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:58.678177  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:58.678188  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:07:58.776120  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302878.748851389
	
	I1007 12:07:58.776147  401591 fix.go:216] guest clock: 1728302878.748851389
	I1007 12:07:58.776158  401591 fix.go:229] Guest: 2024-10-07 12:07:58.748851389 +0000 UTC Remote: 2024-10-07 12:07:58.674704612 +0000 UTC m=+72.466738357 (delta=74.146777ms)
	I1007 12:07:58.776181  401591 fix.go:200] guest clock delta is within tolerance: 74.146777ms
	I1007 12:07:58.776187  401591 start.go:83] releasing machines lock for "ha-628553-m02", held for 24.999226116s
	I1007 12:07:58.776211  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.776496  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:58.779145  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.779528  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.779560  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.782069  401591 out.go:177] * Found network options:
	I1007 12:07:58.783459  401591 out.go:177]   - NO_PROXY=192.168.39.110
	W1007 12:07:58.784861  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:07:58.784899  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785569  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785759  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785866  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:07:58.785905  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	W1007 12:07:58.785978  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:07:58.786070  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:07:58.786094  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.788699  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.788936  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789075  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.789100  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789286  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.789381  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.789402  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789444  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.789536  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.789631  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.789706  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.789783  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.789824  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.789925  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:59.016879  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:07:59.023633  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:07:59.023710  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:07:59.041152  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:07:59.041183  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:07:59.041268  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:07:59.058168  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:07:59.074089  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:07:59.074153  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:07:59.089704  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:07:59.104808  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:07:59.234539  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:07:59.391501  401591 docker.go:233] disabling docker service ...
	I1007 12:07:59.391564  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:07:59.406313  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:07:59.420588  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:07:59.553910  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:07:59.664194  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:07:59.679241  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:07:59.699517  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:07:59.699594  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.710670  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:07:59.710739  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.721864  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.733897  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.746035  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:07:59.757811  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.769881  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.789700  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.800942  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:07:59.811016  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:07:59.811084  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:07:59.827337  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:07:59.838316  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:59.964123  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:08:00.067227  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:08:00.067310  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:08:00.073044  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:08:00.073120  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:08:00.077800  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:08:00.127300  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:08:00.127397  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:08:00.156941  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:08:00.190072  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:08:00.191853  401591 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:08:00.193177  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:08:00.196263  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:08:00.196746  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:08:00.196779  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:08:00.196928  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:08:00.201903  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:00.215603  401591 mustload.go:65] Loading cluster: ha-628553
	I1007 12:08:00.215803  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:00.216063  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:00.216108  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:00.231500  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I1007 12:08:00.231984  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:00.232515  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:00.232538  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:00.232906  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:00.233117  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:08:00.234754  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:08:00.235153  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:00.235205  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:00.251119  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I1007 12:08:00.251713  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:00.252244  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:00.252269  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:00.252599  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:00.252779  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:08:00.252870  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.169
	I1007 12:08:00.252879  401591 certs.go:194] generating shared ca certs ...
	I1007 12:08:00.252902  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.253042  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:08:00.253085  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:08:00.253095  401591 certs.go:256] generating profile certs ...
	I1007 12:08:00.253179  401591 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:08:00.253210  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7
	I1007 12:08:00.253235  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.254]
	I1007 12:08:00.386276  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 ...
	I1007 12:08:00.386312  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7: {Name:mk3203e0eda21b3db6f2dd0a690d84683948f867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.386525  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7 ...
	I1007 12:08:00.386553  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7: {Name:mkfc3d62b17b51155465b7666879f42f7347e54c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.386666  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:08:00.386851  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:08:00.387056  401591 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:08:00.387074  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:08:00.387092  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:08:00.387112  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:08:00.387134  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:08:00.387151  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:08:00.387168  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:08:00.387184  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:08:00.387203  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:08:00.387277  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:08:00.387324  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:08:00.387338  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:08:00.387372  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:08:00.387402  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:08:00.387436  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:08:00.387492  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:08:00.387532  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:08:00.387560  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:08:00.387578  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:00.387630  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:08:00.391299  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:00.391779  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:08:00.391810  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:00.392002  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:08:00.392226  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:08:00.392412  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:08:00.392620  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:08:00.467476  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:08:00.476301  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:08:00.489016  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:08:00.494136  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:08:00.509194  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:08:00.513966  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:08:00.525972  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:08:00.530730  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:08:00.543099  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:08:00.548533  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:08:00.560887  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:08:00.565537  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:08:00.578649  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:08:00.607063  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:08:00.634228  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:08:00.660702  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:08:00.687010  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 12:08:00.713721  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:08:00.740934  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:08:00.768133  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:08:00.794572  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:08:00.820864  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:08:00.847539  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:08:00.876441  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:08:00.895435  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:08:00.913785  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:08:00.932908  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:08:00.951947  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:08:00.969974  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:08:00.988515  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:08:01.007600  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:08:01.014010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:08:01.025708  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.030507  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.030585  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.037094  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:08:01.049368  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:08:01.062454  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.067451  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.067538  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.073743  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:08:01.085386  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:08:01.096871  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.102352  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.102441  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.108559  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:08:01.120791  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:08:01.125796  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:08:01.125854  401591 kubeadm.go:934] updating node {m02 192.168.39.169 8443 v1.31.1 crio true true} ...
	I1007 12:08:01.125945  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:08:01.125972  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:08:01.126011  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:08:01.142927  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:08:01.143035  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:08:01.143100  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:08:01.154825  401591 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:08:01.154901  401591 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:08:01.166246  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:08:01.166280  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:08:01.166330  401591 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 12:08:01.166350  401591 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 12:08:01.166352  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:08:01.171889  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:08:01.171923  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:08:01.865609  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:08:01.865701  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:08:01.871954  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:08:01.872006  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:08:01.960218  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:08:02.002318  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:08:02.002440  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:08:02.020653  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:08:02.020697  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
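	The kubectl, kubeadm, and kubelet binaries above are fetched with checksum-qualified URLs (...?checksum=file:...sha256) and then copied to /var/lib/minikube/binaries when the existence check fails. A minimal, self-contained Go sketch of that download-and-verify step, assuming only the standard library and reusing the kubectl URL from this run:

	// Illustrative sketch: download a release binary, verify it against its
	// published .sha256 file, and write it locally. Error handling is minimal.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"

		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sum, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}

		// Compare the SHA-256 of the downloaded bytes with the published digest.
		want := strings.Fields(string(sum))[0]
		got := sha256.Sum256(bin)
		if hex.EncodeToString(got[:]) != want {
			panic("checksum mismatch")
		}
		if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("kubectl verified and written,", len(bin), "bytes")
	}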
	I1007 12:08:02.500270  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:08:02.510702  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:08:02.529075  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:08:02.546750  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:08:02.565165  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:08:02.569362  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:02.582612  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:02.707124  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
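	The grep and rewrite of /etc/hosts a few lines above pin control-plane.minikube.internal to the HA VIP idempotently. A small Go sketch of the same filter-and-append approach, assuming only the standard library and writing to a local hosts.new copy so it can run unprivileged:

	// Sketch: drop any stale control-plane.minikube.internal line, then append
	// the current VIP mapping, mirroring the grep -v / echo pipeline above.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const alias = "control-plane.minikube.internal"
		const entry = "192.168.39.254\t" + alias

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}

		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+alias) {
				continue // stale entry for the alias
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)

		if err := os.WriteFile("hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}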
	I1007 12:08:02.725325  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:08:02.725700  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:02.725750  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:02.741913  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I1007 12:08:02.742441  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:02.742930  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:02.742953  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:02.743338  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:02.743547  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:08:02.743717  401591 start.go:317] joinCluster: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:08:02.743844  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:08:02.743869  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:08:02.747217  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:02.747665  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:08:02.747694  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:02.747872  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:08:02.748048  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:08:02.748193  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:08:02.748311  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:08:02.893504  401591 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:02.893569  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xsg4ou.msqa1mnarg4j4fst --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m02 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443"
	I1007 12:08:24.411215  401591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xsg4ou.msqa1mnarg4j4fst --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m02 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443": (21.517602331s)
	I1007 12:08:24.411250  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:08:24.991460  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553-m02 minikube.k8s.io/updated_at=2024_10_07T12_08_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=false
	I1007 12:08:25.149659  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-628553-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:08:25.289097  401591 start.go:319] duration metric: took 22.545377397s to joinCluster
	I1007 12:08:25.289200  401591 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:25.289529  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:25.291070  401591 out.go:177] * Verifying Kubernetes components...
	I1007 12:08:25.292571  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:25.564988  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:08:25.614504  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:08:25.614869  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:08:25.614979  401591 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:08:25.615327  401591 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m02" to be "Ready" ...
	I1007 12:08:25.615461  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:25.615476  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:25.615490  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:25.615502  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:25.627711  401591 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 12:08:26.115662  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:26.115688  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:26.115696  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:26.115700  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:26.119790  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:26.615649  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:26.615673  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:26.615681  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:26.615685  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:26.619911  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.115994  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:27.116020  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:27.116029  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:27.116032  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:27.120154  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.616200  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:27.616222  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:27.616230  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:27.616234  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:27.620627  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.621267  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:28.116293  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:28.116321  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:28.116331  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:28.116337  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:28.121199  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:28.616216  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:28.616252  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:28.616260  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:28.616275  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:28.624618  401591 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:08:29.116125  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:29.116148  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:29.116156  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:29.116161  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:29.143192  401591 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1007 12:08:29.616218  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:29.616252  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:29.616260  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:29.616263  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:29.621645  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:29.622758  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:30.116377  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:30.116414  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:30.116434  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:30.116442  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:30.120276  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:30.616264  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:30.616289  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:30.616298  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:30.616302  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:30.619656  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:31.115662  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:31.115686  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:31.115695  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:31.115698  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:31.120037  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:31.616077  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:31.616103  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:31.616112  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:31.616119  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.027207  401591 round_trippers.go:574] Response Status: 200 OK in 411 milliseconds
	I1007 12:08:32.028035  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:32.116023  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:32.116049  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:32.116061  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:32.116066  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.123800  401591 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:08:32.615910  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:32.615936  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:32.615945  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:32.615949  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.619848  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:33.115622  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:33.115645  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:33.115652  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:33.115657  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:33.119744  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:33.616336  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:33.616363  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:33.616372  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:33.616378  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:33.620139  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:34.116322  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:34.116357  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:34.116368  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:34.116374  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:34.119958  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:34.120614  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:34.615645  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:34.615672  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:34.615682  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:34.615687  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:34.619017  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:35.115922  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:35.115951  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:35.115965  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:35.115969  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:35.119735  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:35.615551  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:35.615578  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:35.615589  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:35.615595  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:35.619854  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:36.115806  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:36.115830  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:36.115839  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:36.115842  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:36.119509  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:36.616590  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:36.616626  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:36.616638  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:36.616646  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:36.620711  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:36.621977  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:37.116201  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:37.116229  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:37.116237  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:37.116241  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:37.119861  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:37.615763  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:37.615789  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:37.615798  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:37.615801  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:37.619542  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:38.116230  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:38.116254  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:38.116262  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:38.116266  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:38.119599  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:38.616300  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:38.616327  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:38.616336  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:38.616340  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:38.622637  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:08:38.623148  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:39.116056  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:39.116089  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:39.116102  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:39.116108  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:39.119313  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:39.615634  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:39.615660  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:39.615668  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:39.615672  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:39.619449  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:40.116288  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:40.116318  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:40.116330  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:40.116337  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:40.120596  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:40.615608  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:40.615636  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:40.615645  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:40.615650  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:40.619654  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:41.115684  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:41.115712  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:41.115723  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:41.115729  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:41.119362  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:41.119941  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:41.616052  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:41.616080  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:41.616092  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:41.616099  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:41.621355  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:42.116153  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:42.116179  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:42.116190  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:42.116195  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:42.119158  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:42.615813  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:42.615838  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:42.615849  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:42.615856  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:42.619479  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.116150  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.116183  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.116193  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.116197  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.119726  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.120412  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:43.615803  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.615825  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.615833  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.615837  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.619282  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.619820  401591 node_ready.go:49] node "ha-628553-m02" has status "Ready":"True"
	I1007 12:08:43.619840  401591 node_ready.go:38] duration metric: took 18.00448517s for node "ha-628553-m02" to be "Ready" ...
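	The polling above issues raw GETs against /api/v1/nodes/ha-628553-m02 every ~500ms until the node reports Ready. A hedged sketch of the same wait expressed with client-go, using the kubeconfig path and 6m0s budget from this run; the interval and helper function are illustrative, not minikube's node_ready implementation:

	// Sketch: poll a node until its Ready condition is True, or give up after
	// the timeout, roughly what the round_trippers loop above is doing.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19763-377026/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			n, err := cs.CoreV1().Nodes().Get(ctx, "ha-628553-m02", metav1.GetOptions{})
			if err == nil && nodeReady(n) {
				fmt.Println("node Ready")
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for node to be Ready")
			case <-time.After(500 * time.Millisecond): // assumed poll interval
			}
		}
	}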
	I1007 12:08:43.619850  401591 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I1007 12:08:43.619942  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:43.619953  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.619962  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.619968  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.625430  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:43.631358  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.631464  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:08:43.631473  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.631481  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.631485  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.634796  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.635822  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.635842  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.635852  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.635858  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.638589  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.639211  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.639241  401591 pod_ready.go:82] duration metric: took 7.850216ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.639256  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.639336  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:08:43.639349  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.639360  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.639367  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.642168  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.642861  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.642879  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.642885  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.642891  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.645645  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.646131  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.646152  401591 pod_ready.go:82] duration metric: took 6.888201ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.646164  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.646225  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:08:43.646233  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.646240  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.646244  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.649034  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.649700  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.649718  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.649726  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.649731  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.652932  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.653474  401591 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.653494  401591 pod_ready.go:82] duration metric: took 7.324392ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.653506  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.653570  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:08:43.653578  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.653585  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.653589  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.656625  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.657314  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.657332  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.657340  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.657344  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.659929  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.660411  401591 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.660431  401591 pod_ready.go:82] duration metric: took 6.918652ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.660446  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.816876  401591 request.go:632] Waited for 156.326759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:08:43.816939  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:08:43.816943  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.816951  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.816956  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.820806  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.015988  401591 request.go:632] Waited for 194.312012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.016073  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.016081  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.016091  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.016121  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.019609  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.020136  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.020158  401591 pod_ready.go:82] duration metric: took 359.705878ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.020169  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.216359  401591 request.go:632] Waited for 196.109348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:08:44.216441  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:08:44.216449  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.216460  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.216468  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.222633  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:08:44.416891  401591 request.go:632] Waited for 193.411987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:44.416975  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:44.416983  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.416993  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.416999  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.420954  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.421562  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.421582  401591 pod_ready.go:82] duration metric: took 401.406583ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.421592  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.616625  401591 request.go:632] Waited for 194.940502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:08:44.616688  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:08:44.616693  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.616701  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.616707  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.620706  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.815865  401591 request.go:632] Waited for 194.348456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.815947  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.815954  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.815966  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.815972  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.819923  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.820749  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.820767  401591 pod_ready.go:82] duration metric: took 399.169132ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.820778  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.015880  401591 request.go:632] Waited for 195.028084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:08:45.015978  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:08:45.015983  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.015991  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.015997  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.020421  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.216616  401591 request.go:632] Waited for 195.391964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:45.216689  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:45.216696  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.216707  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.216712  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.221024  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.221697  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:45.221728  401591 pod_ready.go:82] duration metric: took 400.942386ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.221743  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.416754  401591 request.go:632] Waited for 194.909444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:08:45.416821  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:08:45.416834  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.416842  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.416848  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.421020  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.616294  401591 request.go:632] Waited for 194.468244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:45.616378  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:45.616387  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.616399  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.616406  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.620542  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.621474  401591 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:45.621500  401591 pod_ready.go:82] duration metric: took 399.748616ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.621515  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.816631  401591 request.go:632] Waited for 195.03231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:08:45.816699  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:08:45.816705  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.816713  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.816718  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.820607  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:46.016805  401591 request.go:632] Waited for 195.41966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.016911  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.016918  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.016926  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.016930  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.021351  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.021889  401591 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.021914  401591 pod_ready.go:82] duration metric: took 400.391171ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.021926  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.215992  401591 request.go:632] Waited for 193.955382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:08:46.216085  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:08:46.216092  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.216102  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.216108  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.219547  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:46.416084  401591 request.go:632] Waited for 195.950012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:46.416159  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:46.416167  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.416179  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.416198  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.420356  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.420972  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.420993  401591 pod_ready.go:82] duration metric: took 399.057557ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.421005  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.616254  401591 request.go:632] Waited for 195.135703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:08:46.616343  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:08:46.616355  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.616366  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.616375  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.625428  401591 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:08:46.816391  401591 request.go:632] Waited for 190.390972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.816468  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.816473  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.816482  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.816488  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.820601  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.821110  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.821133  401591 pod_ready.go:82] duration metric: took 400.121331ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.821145  401591 pod_ready.go:39] duration metric: took 3.201283112s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:08:46.821161  401591 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:08:46.821222  401591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:08:46.839291  401591 api_server.go:72] duration metric: took 21.550041864s to wait for apiserver process to appear ...
	I1007 12:08:46.839326  401591 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:08:46.839354  401591 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:08:46.845263  401591 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1007 12:08:46.845352  401591 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:08:46.845360  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.845369  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.845373  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.846772  401591 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1007 12:08:46.846883  401591 api_server.go:141] control plane version: v1.31.1
	I1007 12:08:46.846902  401591 api_server.go:131] duration metric: took 7.569264ms to wait for apiserver health ...
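	The healthz probe above expects a literal "ok" body, and the follow-up GET /version reports the control-plane version (v1.31.1 in this run). A short client-go sketch of both checks, assuming the default kubeconfig location rather than this run's Jenkins profile:

	// Sketch: hit /healthz and /version through client-go's discovery client.
	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// A healthy apiserver answers GET /healthz with the body "ok".
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body)

		// GET /version returns the control-plane build info.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion)
	}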
	I1007 12:08:46.846910  401591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:08:47.016224  401591 request.go:632] Waited for 169.208213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.016315  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.016324  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.016337  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.016348  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.021945  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:47.026191  401591 system_pods.go:59] 17 kube-system pods found
	I1007 12:08:47.026232  401591 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:08:47.026238  401591 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:08:47.026242  401591 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:08:47.026246  401591 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:08:47.026251  401591 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:08:47.026255  401591 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:08:47.026260  401591 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:08:47.026264  401591 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:08:47.026268  401591 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:08:47.026273  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:08:47.026276  401591 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:08:47.026279  401591 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:08:47.026282  401591 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:08:47.026285  401591 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:08:47.026288  401591 system_pods.go:61] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:08:47.026291  401591 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:08:47.026294  401591 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:08:47.026300  401591 system_pods.go:74] duration metric: took 179.385599ms to wait for pod list to return data ...
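The 17-pod inventory above is a single list of the kube-system namespace followed by a per-pod phase check. A small client-go sketch of the equivalent call; the kubeconfig path is a placeholder, not the profile's actual location:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // The log above prints "Running" for pods whose phase is PodRunning.
            fmt.Printf("  %q running=%v\n", p.Name, p.Status.Phase == corev1.PodRunning)
        }
    }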
	I1007 12:08:47.026311  401591 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:08:47.216777  401591 request.go:632] Waited for 190.349118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:08:47.216844  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:08:47.216851  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.216861  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.216867  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.220501  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:47.220765  401591 default_sa.go:45] found service account: "default"
	I1007 12:08:47.220790  401591 default_sa.go:55] duration metric: took 194.471685ms for default service account to be created ...
	I1007 12:08:47.220803  401591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:08:47.416131  401591 request.go:632] Waited for 195.245207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.416207  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.416215  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.416224  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.416238  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.422085  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:47.426776  401591 system_pods.go:86] 17 kube-system pods found
	I1007 12:08:47.426812  401591 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:08:47.426820  401591 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:08:47.426826  401591 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:08:47.426832  401591 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:08:47.426837  401591 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:08:47.426842  401591 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:08:47.426848  401591 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:08:47.426853  401591 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:08:47.426858  401591 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:08:47.426863  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:08:47.426868  401591 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:08:47.426873  401591 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:08:47.426881  401591 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:08:47.426887  401591 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:08:47.426892  401591 system_pods.go:89] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:08:47.426898  401591 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:08:47.426907  401591 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:08:47.426918  401591 system_pods.go:126] duration metric: took 206.105758ms to wait for k8s-apps to be running ...
	I1007 12:08:47.426931  401591 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:08:47.427006  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:08:47.444273  401591 system_svc.go:56] duration metric: took 17.328443ms WaitForService to wait for kubelet
	I1007 12:08:47.444313  401591 kubeadm.go:582] duration metric: took 22.155070744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:08:47.444339  401591 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:08:47.616864  401591 request.go:632] Waited for 172.422315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:08:47.616938  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:08:47.616945  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.616961  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.616969  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.621972  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:47.622888  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:08:47.622919  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:08:47.622945  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:08:47.622950  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:08:47.622955  401591 node_conditions.go:105] duration metric: took 178.610758ms to run NodePressure ...
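The NodePressure step reads each node's advertised capacity from the Nodes API; both nodes here report 17734596Ki of ephemeral storage and 2 CPUs. A sketch of the same read with client-go, again with a placeholder kubeconfig path:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }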
	I1007 12:08:47.622983  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:08:47.623014  401591 start.go:255] writing updated cluster config ...
	I1007 12:08:47.625468  401591 out.go:201] 
	I1007 12:08:47.627200  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:47.627328  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:08:47.629319  401591 out.go:177] * Starting "ha-628553-m03" control-plane node in "ha-628553" cluster
	I1007 12:08:47.630767  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:08:47.630807  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:08:47.630955  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:08:47.630986  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:08:47.631145  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:08:47.631383  401591 start.go:360] acquireMachinesLock for ha-628553-m03: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:08:47.631439  401591 start.go:364] duration metric: took 32.151µs to acquireMachinesLock for "ha-628553-m03"
	I1007 12:08:47.631463  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:47.631573  401591 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 12:08:47.633396  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:08:47.633527  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:47.633570  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:47.650117  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I1007 12:08:47.650636  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:47.651158  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:47.651181  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:47.651622  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:47.651783  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:08:47.651941  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:08:47.652092  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:08:47.652123  401591 client.go:168] LocalClient.Create starting
	I1007 12:08:47.652165  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:08:47.652208  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:08:47.652231  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:08:47.652328  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:08:47.652361  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:08:47.652377  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:08:47.652400  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:08:47.652412  401591 main.go:141] libmachine: (ha-628553-m03) Calling .PreCreateCheck
	I1007 12:08:47.652572  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:08:47.652989  401591 main.go:141] libmachine: Creating machine...
	I1007 12:08:47.653006  401591 main.go:141] libmachine: (ha-628553-m03) Calling .Create
	I1007 12:08:47.653161  401591 main.go:141] libmachine: (ha-628553-m03) Creating KVM machine...
	I1007 12:08:47.654461  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found existing default KVM network
	I1007 12:08:47.654504  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found existing private KVM network mk-ha-628553
	I1007 12:08:47.654721  401591 main.go:141] libmachine: (ha-628553-m03) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 ...
	I1007 12:08:47.654751  401591 main.go:141] libmachine: (ha-628553-m03) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:08:47.654817  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:47.654705  402350 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:08:47.654927  401591 main.go:141] libmachine: (ha-628553-m03) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:08:47.943561  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:47.943397  402350 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa...
	I1007 12:08:48.157872  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:48.157710  402350 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/ha-628553-m03.rawdisk...
	I1007 12:08:48.157916  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Writing magic tar header
	I1007 12:08:48.157932  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Writing SSH key tar header
	I1007 12:08:48.157944  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:48.157825  402350 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 ...
	I1007 12:08:48.157970  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03
	I1007 12:08:48.158063  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 (perms=drwx------)
	I1007 12:08:48.158107  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:08:48.158121  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:08:48.158141  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:08:48.158150  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:08:48.158232  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:08:48.158257  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:08:48.158266  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:08:48.158280  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:08:48.158289  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:08:48.158307  401591 main.go:141] libmachine: (ha-628553-m03) Creating domain...
	I1007 12:08:48.158321  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:08:48.158335  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home
	I1007 12:08:48.158350  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Skipping /home - not owner
	I1007 12:08:48.159295  401591 main.go:141] libmachine: (ha-628553-m03) define libvirt domain using xml: 
	I1007 12:08:48.159314  401591 main.go:141] libmachine: (ha-628553-m03) <domain type='kvm'>
	I1007 12:08:48.159321  401591 main.go:141] libmachine: (ha-628553-m03)   <name>ha-628553-m03</name>
	I1007 12:08:48.159327  401591 main.go:141] libmachine: (ha-628553-m03)   <memory unit='MiB'>2200</memory>
	I1007 12:08:48.159361  401591 main.go:141] libmachine: (ha-628553-m03)   <vcpu>2</vcpu>
	I1007 12:08:48.159380  401591 main.go:141] libmachine: (ha-628553-m03)   <features>
	I1007 12:08:48.159389  401591 main.go:141] libmachine: (ha-628553-m03)     <acpi/>
	I1007 12:08:48.159398  401591 main.go:141] libmachine: (ha-628553-m03)     <apic/>
	I1007 12:08:48.159406  401591 main.go:141] libmachine: (ha-628553-m03)     <pae/>
	I1007 12:08:48.159416  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159423  401591 main.go:141] libmachine: (ha-628553-m03)   </features>
	I1007 12:08:48.159430  401591 main.go:141] libmachine: (ha-628553-m03)   <cpu mode='host-passthrough'>
	I1007 12:08:48.159437  401591 main.go:141] libmachine: (ha-628553-m03)   
	I1007 12:08:48.159446  401591 main.go:141] libmachine: (ha-628553-m03)   </cpu>
	I1007 12:08:48.159455  401591 main.go:141] libmachine: (ha-628553-m03)   <os>
	I1007 12:08:48.159465  401591 main.go:141] libmachine: (ha-628553-m03)     <type>hvm</type>
	I1007 12:08:48.159477  401591 main.go:141] libmachine: (ha-628553-m03)     <boot dev='cdrom'/>
	I1007 12:08:48.159488  401591 main.go:141] libmachine: (ha-628553-m03)     <boot dev='hd'/>
	I1007 12:08:48.159499  401591 main.go:141] libmachine: (ha-628553-m03)     <bootmenu enable='no'/>
	I1007 12:08:48.159508  401591 main.go:141] libmachine: (ha-628553-m03)   </os>
	I1007 12:08:48.159518  401591 main.go:141] libmachine: (ha-628553-m03)   <devices>
	I1007 12:08:48.159527  401591 main.go:141] libmachine: (ha-628553-m03)     <disk type='file' device='cdrom'>
	I1007 12:08:48.159543  401591 main.go:141] libmachine: (ha-628553-m03)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/boot2docker.iso'/>
	I1007 12:08:48.159554  401591 main.go:141] libmachine: (ha-628553-m03)       <target dev='hdc' bus='scsi'/>
	I1007 12:08:48.159561  401591 main.go:141] libmachine: (ha-628553-m03)       <readonly/>
	I1007 12:08:48.159571  401591 main.go:141] libmachine: (ha-628553-m03)     </disk>
	I1007 12:08:48.159579  401591 main.go:141] libmachine: (ha-628553-m03)     <disk type='file' device='disk'>
	I1007 12:08:48.159596  401591 main.go:141] libmachine: (ha-628553-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:08:48.159611  401591 main.go:141] libmachine: (ha-628553-m03)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/ha-628553-m03.rawdisk'/>
	I1007 12:08:48.159621  401591 main.go:141] libmachine: (ha-628553-m03)       <target dev='hda' bus='virtio'/>
	I1007 12:08:48.159629  401591 main.go:141] libmachine: (ha-628553-m03)     </disk>
	I1007 12:08:48.159639  401591 main.go:141] libmachine: (ha-628553-m03)     <interface type='network'>
	I1007 12:08:48.159647  401591 main.go:141] libmachine: (ha-628553-m03)       <source network='mk-ha-628553'/>
	I1007 12:08:48.159659  401591 main.go:141] libmachine: (ha-628553-m03)       <model type='virtio'/>
	I1007 12:08:48.159667  401591 main.go:141] libmachine: (ha-628553-m03)     </interface>
	I1007 12:08:48.159677  401591 main.go:141] libmachine: (ha-628553-m03)     <interface type='network'>
	I1007 12:08:48.159685  401591 main.go:141] libmachine: (ha-628553-m03)       <source network='default'/>
	I1007 12:08:48.159695  401591 main.go:141] libmachine: (ha-628553-m03)       <model type='virtio'/>
	I1007 12:08:48.159702  401591 main.go:141] libmachine: (ha-628553-m03)     </interface>
	I1007 12:08:48.159711  401591 main.go:141] libmachine: (ha-628553-m03)     <serial type='pty'>
	I1007 12:08:48.159722  401591 main.go:141] libmachine: (ha-628553-m03)       <target port='0'/>
	I1007 12:08:48.159732  401591 main.go:141] libmachine: (ha-628553-m03)     </serial>
	I1007 12:08:48.159741  401591 main.go:141] libmachine: (ha-628553-m03)     <console type='pty'>
	I1007 12:08:48.159751  401591 main.go:141] libmachine: (ha-628553-m03)       <target type='serial' port='0'/>
	I1007 12:08:48.159759  401591 main.go:141] libmachine: (ha-628553-m03)     </console>
	I1007 12:08:48.159769  401591 main.go:141] libmachine: (ha-628553-m03)     <rng model='virtio'>
	I1007 12:08:48.159779  401591 main.go:141] libmachine: (ha-628553-m03)       <backend model='random'>/dev/random</backend>
	I1007 12:08:48.159786  401591 main.go:141] libmachine: (ha-628553-m03)     </rng>
	I1007 12:08:48.159791  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159796  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159801  401591 main.go:141] libmachine: (ha-628553-m03)   </devices>
	I1007 12:08:48.159807  401591 main.go:141] libmachine: (ha-628553-m03) </domain>
	I1007 12:08:48.159814  401591 main.go:141] libmachine: (ha-628553-m03) 
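The XML printed above is the full libvirt domain definition the kvm2 driver hands to libvirt before booting the new node. A hedged sketch of turning such a definition into a running domain with the libvirt Go bindings (libvirt.org/go/libvirt); the URI matches the KVMQemuURI in the config above, while the truncated XML string is a stand-in for the definition in the log, not code copied from the driver:

    package main

    import (
        "fmt"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI above
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        domainXML := "<domain type='kvm'>...</domain>" // stand-in for the definition in the log

        // DefineXML persists the domain config; Create then boots it, which is the
        // "Creating domain..." step that follows in the log.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            panic(err)
        }
        name, _ := dom.GetName()
        fmt.Println("started domain", name)
    }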
	I1007 12:08:48.167454  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:19:9b:6c in network default
	I1007 12:08:48.168104  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring networks are active...
	I1007 12:08:48.168135  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:48.168903  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring network default is active
	I1007 12:08:48.169240  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring network mk-ha-628553 is active
	I1007 12:08:48.169699  401591 main.go:141] libmachine: (ha-628553-m03) Getting domain xml...
	I1007 12:08:48.170532  401591 main.go:141] libmachine: (ha-628553-m03) Creating domain...
	I1007 12:08:49.440366  401591 main.go:141] libmachine: (ha-628553-m03) Waiting to get IP...
	I1007 12:08:49.441248  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:49.441739  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:49.441772  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:49.441711  402350 retry.go:31] will retry after 304.052486ms: waiting for machine to come up
	I1007 12:08:49.747277  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:49.747963  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:49.747996  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:49.747904  402350 retry.go:31] will retry after 363.120796ms: waiting for machine to come up
	I1007 12:08:50.113364  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.113854  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.113886  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.113784  402350 retry.go:31] will retry after 318.214065ms: waiting for machine to come up
	I1007 12:08:50.434117  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.434742  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.434772  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.434669  402350 retry.go:31] will retry after 557.05591ms: waiting for machine to come up
	I1007 12:08:50.993368  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.993877  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.993902  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.993839  402350 retry.go:31] will retry after 534.862367ms: waiting for machine to come up
	I1007 12:08:51.530722  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:51.531299  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:51.531330  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:51.531236  402350 retry.go:31] will retry after 674.225428ms: waiting for machine to come up
	I1007 12:08:52.207219  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:52.207779  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:52.207805  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:52.207744  402350 retry.go:31] will retry after 750.38088ms: waiting for machine to come up
	I1007 12:08:52.959912  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:52.960419  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:52.960456  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:52.960375  402350 retry.go:31] will retry after 1.032745665s: waiting for machine to come up
	I1007 12:08:53.994776  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:53.995316  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:53.995345  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:53.995259  402350 retry.go:31] will retry after 1.174624993s: waiting for machine to come up
	I1007 12:08:55.171247  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:55.171687  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:55.171709  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:55.171640  402350 retry.go:31] will retry after 2.315279218s: waiting for machine to come up
	I1007 12:08:57.488351  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:57.488810  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:57.488838  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:57.488771  402350 retry.go:31] will retry after 1.769995019s: waiting for machine to come up
	I1007 12:08:59.260072  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:59.260605  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:59.260637  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:59.260547  402350 retry.go:31] will retry after 3.352254545s: waiting for machine to come up
	I1007 12:09:02.616362  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:02.616828  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:09:02.616850  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:09:02.616780  402350 retry.go:31] will retry after 4.496920566s: waiting for machine to come up
	I1007 12:09:07.118974  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:07.119565  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:09:07.119593  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:09:07.119492  402350 retry.go:31] will retry after 4.132199874s: waiting for machine to come up
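The block above is a polling loop: the driver repeatedly looks for the new domain's DHCP lease and, while none exists, sleeps for a randomized and roughly growing delay before retrying. A self-contained sketch of that pattern; lookupIP is a stand-in for the real lease query, and the delays are illustrative:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for querying the DHCP leases of network mk-ha-628553;
    // here it simply fails a few times before "finding" an address.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address")
        }
        return "192.168.39.149", nil
    }

    func main() {
        delay := 300 * time.Millisecond
        for attempt := 0; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("Found IP for machine:", ip)
                return
            }
            // Randomize the wait a little and grow it, mirroring the
            // "will retry after ..." lines in the log.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", attempt+1, wait)
            time.Sleep(wait)
            if delay < 5*time.Second {
                delay += delay / 2
            }
        }
    }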
	I1007 12:09:11.256196  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.256790  401591 main.go:141] libmachine: (ha-628553-m03) Found IP for machine: 192.168.39.149
	I1007 12:09:11.256824  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has current primary IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.256833  401591 main.go:141] libmachine: (ha-628553-m03) Reserving static IP address...
	I1007 12:09:11.257175  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find host DHCP lease matching {name: "ha-628553-m03", mac: "52:54:00:3c:9f:34", ip: "192.168.39.149"} in network mk-ha-628553
	I1007 12:09:11.338093  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Getting to WaitForSSH function...
	I1007 12:09:11.338124  401591 main.go:141] libmachine: (ha-628553-m03) Reserved static IP address: 192.168.39.149
	I1007 12:09:11.338139  401591 main.go:141] libmachine: (ha-628553-m03) Waiting for SSH to be available...
	I1007 12:09:11.341396  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.341892  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.341925  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.342105  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH client type: external
	I1007 12:09:11.342133  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa (-rw-------)
	I1007 12:09:11.342177  401591 main.go:141] libmachine: (ha-628553-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:09:11.342197  401591 main.go:141] libmachine: (ha-628553-m03) DBG | About to run SSH command:
	I1007 12:09:11.342214  401591 main.go:141] libmachine: (ha-628553-m03) DBG | exit 0
	I1007 12:09:11.471281  401591 main.go:141] libmachine: (ha-628553-m03) DBG | SSH cmd err, output: <nil>: 
	I1007 12:09:11.471621  401591 main.go:141] libmachine: (ha-628553-m03) KVM machine creation complete!
	I1007 12:09:11.471952  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:09:11.472582  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:11.472840  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:11.473024  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:09:11.473037  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetState
	I1007 12:09:11.474527  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:09:11.474548  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:09:11.474555  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:09:11.474563  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.477303  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.477650  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.477666  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.477788  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.477993  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.478174  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.478306  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.478470  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.478702  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.478716  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:09:11.587071  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
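The native SSH probe above dials port 22 with the generated key and runs `exit 0`; a zero exit status is all that is needed to declare the VM reachable and move on to provisioning. A hypothetical re-creation with golang.org/x/crypto/ssh; the key path is a placeholder and the host-key handling mirrors the StrictHostKeyChecking=no flag used earlier:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/path/to/machines/ha-628553-m03/id_rsa") // placeholder path
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", "192.168.39.149:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // A zero exit status is all the probe needs.
        if err := sess.Run("exit 0"); err != nil {
            panic(err)
        }
        fmt.Println("SSH is available")
    }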
	I1007 12:09:11.587095  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:09:11.587105  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.589883  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.590265  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.590295  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.590447  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.590647  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.590829  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.591025  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.591169  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.591356  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.591367  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:09:11.704302  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:09:11.704403  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:09:11.704415  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:09:11.704426  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.704723  401591 buildroot.go:166] provisioning hostname "ha-628553-m03"
	I1007 12:09:11.704750  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.704905  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.707646  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.708032  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.708062  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.708204  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.708466  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.708666  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.708795  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.708972  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.709229  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.709247  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m03 && echo "ha-628553-m03" | sudo tee /etc/hostname
	I1007 12:09:11.834437  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m03
	
	I1007 12:09:11.834498  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.837609  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.837983  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.838013  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.838374  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.838612  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.838805  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.839005  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.839175  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.839394  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.839420  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:09:11.962733  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:09:11.962765  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:09:11.962788  401591 buildroot.go:174] setting up certificates
	I1007 12:09:11.962801  401591 provision.go:84] configureAuth start
	I1007 12:09:11.962814  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.963127  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:11.965755  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.966166  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.966201  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.966379  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.968397  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.968678  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.968703  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.968812  401591 provision.go:143] copyHostCerts
	I1007 12:09:11.968847  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:09:11.968897  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:09:11.968910  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:09:11.968994  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:09:11.969133  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:09:11.969163  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:09:11.969173  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:09:11.969222  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:09:11.969301  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:09:11.969326  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:09:11.969332  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:09:11.969367  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:09:11.969444  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m03 san=[127.0.0.1 192.168.39.149 ha-628553-m03 localhost minikube]
	I1007 12:09:12.008085  401591 provision.go:177] copyRemoteCerts
	I1007 12:09:12.008153  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:09:12.008198  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.011020  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.011447  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.011479  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.011639  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.011896  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.012077  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.012241  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.099103  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:09:12.099196  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:09:12.129470  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:09:12.129570  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:09:12.156229  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:09:12.156324  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:09:12.182409  401591 provision.go:87] duration metric: took 219.592268ms to configureAuth
	I1007 12:09:12.182440  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:09:12.182689  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:12.182805  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.186445  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.186906  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.186942  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.187197  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.187409  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.187561  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.187701  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.187919  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:12.188176  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:12.188201  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:09:12.442162  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:09:12.442201  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:09:12.442252  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetURL
	I1007 12:09:12.443642  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using libvirt version 6000000
	I1007 12:09:12.445960  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.446454  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.446484  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.446704  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:09:12.446717  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:09:12.446724  401591 client.go:171] duration metric: took 24.794590297s to LocalClient.Create
	I1007 12:09:12.446748  401591 start.go:167] duration metric: took 24.794658821s to libmachine.API.Create "ha-628553"
	I1007 12:09:12.446758  401591 start.go:293] postStartSetup for "ha-628553-m03" (driver="kvm2")
	I1007 12:09:12.446768  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:09:12.446787  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.447044  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:09:12.447067  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.449182  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.449535  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.449578  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.449689  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.449866  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.450019  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.450128  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.538407  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:09:12.543112  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:09:12.543143  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:09:12.543238  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:09:12.543327  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:09:12.543349  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:09:12.543452  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:09:12.553965  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:09:12.580260  401591 start.go:296] duration metric: took 133.488077ms for postStartSetup
	I1007 12:09:12.580320  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:09:12.580945  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:12.583692  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.584096  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.584119  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.584577  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:09:12.584810  401591 start.go:128] duration metric: took 24.953224798s to createHost
	I1007 12:09:12.584834  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.586899  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.587276  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.587304  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.587460  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.587666  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.587811  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.587989  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.588157  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:12.588403  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:12.588416  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:09:12.699909  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302952.675618146
	
	I1007 12:09:12.699944  401591 fix.go:216] guest clock: 1728302952.675618146
	I1007 12:09:12.699957  401591 fix.go:229] Guest: 2024-10-07 12:09:12.675618146 +0000 UTC Remote: 2024-10-07 12:09:12.584823089 +0000 UTC m=+146.376856843 (delta=90.795057ms)
	I1007 12:09:12.699983  401591 fix.go:200] guest clock delta is within tolerance: 90.795057ms
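Editor's note: the lines above compare the VM's "date +%s.%N" output with the host clock and accept the roughly 90ms delta. A small Go sketch of that comparison, with an assumed tolerance value (the real threshold lives in minikube's fix logic and may differ):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute host/guest clock delta and whether it
// falls under the given tolerance. Illustrative only.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(90 * time.Millisecond) // e.g. the ~90ms delta seen above
	d, ok := withinTolerance(guest, host, 2*time.Second) // 2s tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
}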
	I1007 12:09:12.700015  401591 start.go:83] releasing machines lock for "ha-628553-m03", held for 25.068545198s
	I1007 12:09:12.700046  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.700343  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:12.703273  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.703654  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.703685  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.706106  401591 out.go:177] * Found network options:
	I1007 12:09:12.707602  401591 out.go:177]   - NO_PROXY=192.168.39.110,192.168.39.169
	W1007 12:09:12.709074  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:09:12.709105  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:09:12.709125  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.709903  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.710157  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.710281  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:09:12.710326  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	W1007 12:09:12.710331  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:09:12.710350  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:09:12.710418  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:09:12.710435  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.713091  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713270  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713549  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.713577  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713688  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.713709  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713890  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.713892  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.714094  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.714096  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.714290  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.714293  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.714448  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.714465  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.965758  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:09:12.972410  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:09:12.972510  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:09:12.991892  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:09:12.991924  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:09:12.992029  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:09:13.011092  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:09:13.027119  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:09:13.027197  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:09:13.043881  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:09:13.059996  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:09:13.194059  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:09:13.363286  401591 docker.go:233] disabling docker service ...
	I1007 12:09:13.363388  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:09:13.380238  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:09:13.395090  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:09:13.539822  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:09:13.684666  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:09:13.699806  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:09:13.721312  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:09:13.721394  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.734593  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:09:13.734678  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.746652  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.758752  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.770649  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:09:13.783579  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.796044  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.816090  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
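Editor's note: the sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and open unprivileged ports. A hedged Go sketch of three of those substitutions on an in-memory copy of the file; the starting contents here are illustrative, not the node's real config:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, as in the first sed above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop the old conmon_cgroup line, then add conmon_cgroup = "pod" after
	// the cgroup_manager line, mirroring the delete/append pair in the log.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}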
	I1007 12:09:13.829211  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:09:13.841584  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:09:13.841652  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:09:13.858346  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
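Editor's note: when the bridge-nf-call-iptables sysctl is missing, the log falls back to loading br_netfilter and then enables IPv4 forwarding. A short sketch of that fallback, mirroring the commands above rather than minikube's actual code (requires root to run):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and folds its combined output into the error.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	// If the netfilter sysctl cannot be read, load the br_netfilter module.
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl missing, loading br_netfilter:", err)
		_ = run("modprobe", "br_netfilter")
	}
	// Enable IPv4 forwarding, as in the echo command above.
	_ = run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}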
	I1007 12:09:13.870682  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:14.015562  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:09:14.112385  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:09:14.112472  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:09:14.117706  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:09:14.117785  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:09:14.121973  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:09:14.164678  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:09:14.164778  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:09:14.195026  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:09:14.228305  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:09:14.229710  401591 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:09:14.230954  401591 out.go:177]   - env NO_PROXY=192.168.39.110,192.168.39.169
	I1007 12:09:14.232215  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:14.235268  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:14.236414  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:14.236455  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:14.236834  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:09:14.241615  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
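Editor's note: the one-liner above rewrites /etc/hosts idempotently, stripping any existing host.minikube.internal entry before appending the fresh mapping. A Go sketch of the same upsert, writing to a scratch file instead of /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any line ending in "<tab><name>" and appends "ip<tab>name",
// mirroring the grep -v / echo pipeline in the log.
func upsertHost(contents, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // old entry removed
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	out := upsertHost(in, "192.168.39.1", "host.minikube.internal")
	_ = os.WriteFile("hosts.sample", []byte(out), 0o644) // scratch file, not /etc/hosts
	fmt.Print(out)
}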
	I1007 12:09:14.255885  401591 mustload.go:65] Loading cluster: ha-628553
	I1007 12:09:14.256171  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:14.256468  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:14.256525  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:14.272191  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I1007 12:09:14.272704  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:14.273292  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:14.273317  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:14.273675  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:14.273860  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:09:14.275739  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:09:14.276042  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:14.276078  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:14.291563  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34379
	I1007 12:09:14.291960  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:14.292503  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:14.292525  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:14.292841  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:14.293029  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:09:14.293266  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.149
	I1007 12:09:14.293282  401591 certs.go:194] generating shared ca certs ...
	I1007 12:09:14.293298  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.293454  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:09:14.293500  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:09:14.293518  401591 certs.go:256] generating profile certs ...
	I1007 12:09:14.293595  401591 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:09:14.293624  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5
	I1007 12:09:14.293644  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.149 192.168.39.254]
	I1007 12:09:14.510662  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 ...
	I1007 12:09:14.510698  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5: {Name:mke401c308480be9f53e9bff701f2e9e4cf3af88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.510883  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5 ...
	I1007 12:09:14.510897  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5: {Name:mk6ef257f67983b566726de1c934d8565c12b533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.510988  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:09:14.511123  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:09:14.511263  401591 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:09:14.511281  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:09:14.511294  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:09:14.511306  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:09:14.511318  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:09:14.511328  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:09:14.511341  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:09:14.511350  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:09:14.551130  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:09:14.551306  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:09:14.551354  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:09:14.551363  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:09:14.551385  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:09:14.551414  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:09:14.551453  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:09:14.551518  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:09:14.551570  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:14.551588  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:09:14.551601  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:09:14.551640  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:09:14.554905  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:14.555423  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:09:14.555460  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:14.555653  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:09:14.555879  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:09:14.556052  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:09:14.556195  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:09:14.631352  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:09:14.636908  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:09:14.651074  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:09:14.656279  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:09:14.669909  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:09:14.674787  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:09:14.685770  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:09:14.690694  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:09:14.702721  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:09:14.707691  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:09:14.719165  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:09:14.724048  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:09:14.737169  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:09:14.766716  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:09:14.794736  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:09:14.821693  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:09:14.848771  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:09:14.877403  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:09:14.903816  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:09:14.930704  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:09:14.958763  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:09:14.986639  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:09:15.012198  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:09:15.040552  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:09:15.060843  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:09:15.079624  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:09:15.099559  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:09:15.119015  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:09:15.138902  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:09:15.157844  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:09:15.176996  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:09:15.183306  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:09:15.195832  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.201336  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.201442  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.208010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:09:15.220845  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:09:15.233290  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.238387  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.238463  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.245368  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:09:15.257699  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:09:15.270151  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.274983  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.275048  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.281100  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
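Editor's note: the commands above compute each certificate's OpenSSL subject hash and symlink /etc/ssl/certs/&lt;hash&gt;.0 to it, which is how minikubeCA.pem ends up behind b5213941.0. A sketch of that step with example paths; it shells out to openssl and needs write access to the certs directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCert hashes the certificate subject and creates the <hash>.0 symlink,
// roughly what the openssl + ln -fs pair in the log does.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := certsDir + "/" + hash + ".0"
	_ = os.Remove(link) // mimic ln -fs by replacing any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}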
	I1007 12:09:15.293845  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:09:15.298173  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:09:15.298242  401591 kubeadm.go:934] updating node {m03 192.168.39.149 8443 v1.31.1 crio true true} ...
	I1007 12:09:15.298356  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:09:15.298388  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:09:15.298436  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:09:15.316713  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:09:15.316806  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
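Editor's note: the generated kube-vip static pod above carries the control-plane VIP (192.168.39.254), the API server port, and the NIC as environment variables, and enables leader election and load balancing. A small Go sketch showing the templating idea for a fragment of that manifest; the real template and full field set live in minikube:

package main

import (
	"os"
	"text/template"
)

// A tiny fragment of the env list above, rendered from per-node values.
var tmpl = template.Must(template.New("kube-vip").Parse(`env:
- name: port
  value: "{{.Port}}"
- name: vip_interface
  value: {{.Interface}}
- name: address
  value: {{.VIP}}
`))

func main() {
	// Values taken from the log for illustration.
	_ = tmpl.Execute(os.Stdout, struct {
		Port      int
		Interface string
		VIP       string
	}{8443, "eth0", "192.168.39.254"})
}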
	I1007 12:09:15.316885  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:09:15.329178  401591 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:09:15.329260  401591 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:09:15.341535  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 12:09:15.341551  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 12:09:15.341569  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:09:15.341576  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:09:15.341585  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:09:15.341597  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:09:15.341641  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:09:15.341660  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:09:15.361141  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:09:15.361169  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:09:15.361188  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:09:15.361231  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:09:15.361273  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:09:15.361282  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:09:15.386048  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:09:15.386094  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
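Editor's note: the transfer above falls back to dl.k8s.io when the binaries are not already cached on the node; each URL carries a checksum reference to its .sha256 file. A tiny sketch of how those URLs are assembled (illustrative helper, not minikube's binary.go):

package main

import "fmt"

// binaryURL builds the download URL plus checksum reference seen in the log.
func binaryURL(version, arch, name string) string {
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/%s/%s", version, arch, name)
	return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
}

func main() {
	for _, b := range []string{"kubelet", "kubeadm", "kubectl"} {
		fmt.Println(binaryURL("v1.31.1", "amd64", b))
	}
}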
	I1007 12:09:16.354010  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:09:16.365447  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:09:16.386247  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:09:16.405656  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:09:16.424160  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:09:16.428897  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:09:16.443784  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:16.576452  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:09:16.595070  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:09:16.595602  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:16.595675  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:16.612706  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
	I1007 12:09:16.613341  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:16.613998  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:16.614030  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:16.614425  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:16.614648  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:09:16.614817  401591 start.go:317] joinCluster: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1007 12:09:16.615034  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:09:16.615063  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:09:16.618382  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:16.618897  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:09:16.618931  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:16.619128  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:09:16.619318  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:09:16.619512  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:09:16.619676  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:09:16.786244  401591 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:09:16.786300  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lajva.py7n2yqd96dw6gb3 --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m03 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443"
	I1007 12:09:40.133777  401591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lajva.py7n2yqd96dw6gb3 --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m03 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443": (23.347442914s)
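Editor's note: the join above is a standard control-plane join: a token and discovery CA-cert hash printed by the primary, plus the CRI socket, node name, and advertise address for m03. A sketch of assembling that command line, with placeholder token and hash values rather than the ones from this run:

package main

import (
	"fmt"
	"strings"
)

// joinCommand builds the kubeadm control-plane join command shown in the log.
func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string, port int) string {
	args := []string{
		"kubeadm", "join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(joinCommand("control-plane.minikube.internal:8443",
		"<token>", "sha256:<hash>", "ha-628553-m03", "192.168.39.149", 8443))
}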
	I1007 12:09:40.133833  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:09:40.642262  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553-m03 minikube.k8s.io/updated_at=2024_10_07T12_09_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=false
	I1007 12:09:40.798800  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-628553-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:09:40.938486  401591 start.go:319] duration metric: took 24.323665443s to joinCluster
	I1007 12:09:40.938574  401591 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:09:40.938992  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:40.939839  401591 out.go:177] * Verifying Kubernetes components...
	I1007 12:09:40.941073  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:41.179331  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:09:41.207454  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:09:41.207837  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:09:41.207937  401591 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:09:41.208281  401591 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m03" to be "Ready" ...
	I1007 12:09:41.208393  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:41.208405  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:41.208416  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:41.208425  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:41.212516  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:41.709058  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:41.709088  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:41.709105  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:41.709111  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:41.712889  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:42.209244  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:42.209270  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:42.209282  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:42.209291  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:42.215411  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:42.708822  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:42.708852  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:42.708859  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:42.708864  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:42.712350  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:43.208783  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:43.208814  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:43.208825  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:43.208830  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:43.212641  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:43.213313  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:43.708554  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:43.708586  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:43.708598  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:43.708603  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:43.712869  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:44.209341  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:44.209369  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:44.209378  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:44.209383  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:44.213843  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:44.708627  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:44.708655  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:44.708667  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:44.708674  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:44.712946  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:45.208740  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:45.208767  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:45.208780  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:45.208787  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:45.212825  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:45.213803  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:45.709194  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:45.709226  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:45.709239  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:45.709244  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:45.713036  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:46.209154  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:46.209181  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:46.209192  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:46.209196  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:46.212466  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:46.708677  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:46.708707  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:46.708716  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:46.708724  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:46.712340  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:47.208818  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:47.208842  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:47.208851  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:47.208857  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:47.212615  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:47.709164  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:47.709193  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:47.709202  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:47.709205  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:47.713234  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:47.713781  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:48.209498  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:48.209525  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:48.209534  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:48.209537  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:48.213755  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:48.708587  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:48.708611  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:48.708621  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:48.708624  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:48.712036  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:49.208568  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:49.208592  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:49.208603  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:49.208607  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:49.211903  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:49.708691  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:49.708716  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:49.708725  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:49.708729  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:49.712776  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:50.208877  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:50.208902  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:50.208911  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:50.208914  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:50.212493  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:50.213081  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:50.709538  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:50.709562  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:50.709571  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:50.709575  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:50.713279  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:51.209230  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:51.209256  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:51.209265  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:51.209268  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:51.213382  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:51.708830  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:51.708854  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:51.708862  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:51.708866  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:51.712240  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:52.208900  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:52.208926  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:52.208939  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:52.208946  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:52.215313  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:52.216003  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:52.708705  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:52.708730  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:52.708738  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:52.708742  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:52.712616  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:53.209443  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:53.209470  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:53.209480  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:53.209484  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:53.220542  401591 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:09:53.709519  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:53.709546  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:53.709558  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:53.709564  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:53.716163  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:54.208707  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:54.208734  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:54.208746  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:54.208760  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:54.213435  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:54.708587  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:54.708610  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:54.708619  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:54.708622  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:54.712056  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:54.712859  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:55.209203  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:55.209231  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:55.209239  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:55.209245  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:55.212768  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:55.708667  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:55.708695  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:55.708703  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:55.708707  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:55.712313  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.209354  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:56.209383  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.209395  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.209403  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.213377  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.708881  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:56.708908  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.708919  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.708924  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.712370  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.712935  401591 node_ready.go:49] node "ha-628553-m03" has status "Ready":"True"
	I1007 12:09:56.712963  401591 node_ready.go:38] duration metric: took 15.504655916s for node "ha-628553-m03" to be "Ready" ...
	I1007 12:09:56.712977  401591 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:09:56.713073  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:09:56.713085  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.713097  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.713103  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.718978  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:09:56.726344  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.726456  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:09:56.726466  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.726474  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.726490  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.730546  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:56.731604  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.731626  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.731635  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.731641  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.735028  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.735631  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.735652  401591 pod_ready.go:82] duration metric: took 9.273238ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.735664  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.735733  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:09:56.735741  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.735750  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.735755  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.739406  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.740176  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.740199  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.740209  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.740214  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.743560  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.744246  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.744282  401591 pod_ready.go:82] duration metric: took 8.60988ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.744297  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.744377  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:09:56.744385  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.744394  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.744399  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.747762  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.748602  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.748620  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.748631  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.748635  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.751819  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.752620  401591 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.752643  401591 pod_ready.go:82] duration metric: took 8.33893ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.752653  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.752721  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:09:56.752728  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.752736  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.752744  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.755841  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.756900  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:56.756919  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.756928  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.756933  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.762051  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:09:56.762546  401591 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.762567  401591 pod_ready.go:82] duration metric: took 9.907016ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.762577  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.908942  401591 request.go:632] Waited for 146.263139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:09:56.909015  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:09:56.909020  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.909028  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.909033  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.912564  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.109760  401591 request.go:632] Waited for 196.38743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:57.109828  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:57.109833  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.109841  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.109845  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.113445  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.114014  401591 pod_ready.go:93] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.114033  401591 pod_ready.go:82] duration metric: took 351.449136ms for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.114057  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.309353  401591 request.go:632] Waited for 195.205622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:09:57.309419  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:09:57.309425  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.309432  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.309437  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.313075  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.509082  401591 request.go:632] Waited for 195.305317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:57.509151  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:57.509155  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.509166  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.509174  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.512625  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.513112  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.513132  401591 pod_ready.go:82] duration metric: took 399.067745ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.513143  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.709708  401591 request.go:632] Waited for 196.474408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:09:57.709781  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:09:57.709786  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.709794  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.709800  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.713831  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:57.908898  401591 request.go:632] Waited for 194.228676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:57.908982  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:57.908989  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.909010  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.909018  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.912443  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.912928  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.912946  401591 pod_ready.go:82] duration metric: took 399.796848ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.912957  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.109126  401591 request.go:632] Waited for 196.089672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:09:58.109228  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:09:58.109239  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.109254  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.109263  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.113302  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:58.309458  401591 request.go:632] Waited for 195.377342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:58.309526  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:58.309532  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.309540  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.309547  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.313264  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.313917  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:58.313941  401591 pod_ready.go:82] duration metric: took 400.976971ms for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.313953  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.508886  401591 request.go:632] Waited for 194.833329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:09:58.508952  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:09:58.508957  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.508965  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.508968  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.512699  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.709582  401591 request.go:632] Waited for 196.246847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:58.709646  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:58.709651  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.709659  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.709664  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.713267  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.713852  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:58.713872  401591 pod_ready.go:82] duration metric: took 399.911675ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.713882  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.909557  401591 request.go:632] Waited for 195.589727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:09:58.909638  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:09:58.909646  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.909658  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.909667  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.913323  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.109300  401591 request.go:632] Waited for 195.248412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:59.109385  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:59.109397  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.109413  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.109423  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.113724  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.114391  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.114424  401591 pod_ready.go:82] duration metric: took 400.532344ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.114440  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.309421  401591 request.go:632] Waited for 194.863237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:09:59.309496  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:09:59.309505  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.309513  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.309517  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.313524  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.509863  401591 request.go:632] Waited for 195.376113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.509933  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.509939  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.509947  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.509952  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.514238  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.514980  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.515006  401591 pod_ready.go:82] duration metric: took 400.556348ms for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.515021  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.708902  401591 request.go:632] Waited for 193.788377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:09:59.708979  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:09:59.708984  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.708994  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.708999  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.713254  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.909528  401591 request.go:632] Waited for 195.290175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.909618  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.909629  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.909647  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.909670  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.913334  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.913821  401591 pod_ready.go:93] pod "kube-proxy-956k4" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.913839  401591 pod_ready.go:82] duration metric: took 398.810891ms for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.913849  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.108920  401591 request.go:632] Waited for 194.960284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:10:00.108989  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:10:00.108994  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.109003  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.109008  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.112562  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.309314  401591 request.go:632] Waited for 195.880007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:00.309383  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:00.309388  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.309398  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.309402  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.312741  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.313358  401591 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:00.313387  401591 pod_ready.go:82] duration metric: took 399.529803ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.313403  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.509443  401591 request.go:632] Waited for 195.933785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:10:00.509525  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:10:00.509534  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.509546  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.509553  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.513184  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.709406  401591 request.go:632] Waited for 195.365479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:00.709504  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:00.709514  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.709522  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.709529  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.713607  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:00.714279  401591 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:00.714309  401591 pod_ready.go:82] duration metric: took 400.896557ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.714325  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.909245  401591 request.go:632] Waited for 194.818143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:10:00.909342  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:10:00.909351  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.909364  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.909371  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.915481  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:01.109624  401591 request.go:632] Waited for 193.409101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:01.109691  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:01.109697  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.109705  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.109709  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.113699  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.114360  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.114385  401591 pod_ready.go:82] duration metric: took 400.050276ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.114400  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.309693  401591 request.go:632] Waited for 195.205987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:10:01.309795  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:10:01.309803  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.309815  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.309822  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.313815  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.508909  401591 request.go:632] Waited for 194.37677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:01.508986  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:01.508991  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.509002  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.509007  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.512742  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.513256  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.513276  401591 pod_ready.go:82] duration metric: took 398.86838ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.513288  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.709917  401591 request.go:632] Waited for 196.548883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:10:01.710017  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:10:01.710026  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.710034  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.710039  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.714122  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:01.909434  401591 request.go:632] Waited for 194.3948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:10:01.909513  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:10:01.909522  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.909532  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.909540  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.913611  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:01.914046  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.914070  401591 pod_ready.go:82] duration metric: took 400.775584ms for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.914081  401591 pod_ready.go:39] duration metric: took 5.201089226s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:10:01.914096  401591 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:10:01.914154  401591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:10:01.933363  401591 api_server.go:72] duration metric: took 20.994747532s to wait for apiserver process to appear ...
	I1007 12:10:01.933396  401591 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:10:01.933418  401591 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:10:01.938101  401591 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1007 12:10:01.938189  401591 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:10:01.938198  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.938207  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.938213  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.939122  401591 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:10:01.939199  401591 api_server.go:141] control plane version: v1.31.1
	I1007 12:10:01.939214  401591 api_server.go:131] duration metric: took 5.812529ms to wait for apiserver health ...
	I1007 12:10:01.939225  401591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:10:02.109608  401591 request.go:632] Waited for 170.278268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.109688  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.109696  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.109710  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.109721  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.116583  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:02.124470  401591 system_pods.go:59] 24 kube-system pods found
	I1007 12:10:02.124519  401591 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:10:02.124524  401591 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:10:02.124528  401591 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:10:02.124532  401591 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:10:02.124537  401591 system_pods.go:61] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:10:02.124541  401591 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:10:02.124545  401591 system_pods.go:61] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:10:02.124549  401591 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:10:02.124553  401591 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:10:02.124556  401591 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:10:02.124559  401591 system_pods.go:61] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:10:02.124563  401591 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:10:02.124566  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:10:02.124569  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:10:02.124572  401591 system_pods.go:61] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:10:02.124576  401591 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:10:02.124579  401591 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:10:02.124582  401591 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:10:02.124585  401591 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:10:02.124588  401591 system_pods.go:61] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:10:02.124591  401591 system_pods.go:61] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:10:02.124594  401591 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:10:02.124597  401591 system_pods.go:61] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:10:02.124600  401591 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:10:02.124608  401591 system_pods.go:74] duration metric: took 185.374126ms to wait for pod list to return data ...
	I1007 12:10:02.124621  401591 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:10:02.309914  401591 request.go:632] Waited for 185.18335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:10:02.309989  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:10:02.309995  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.310010  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.310017  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.318042  401591 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:10:02.318207  401591 default_sa.go:45] found service account: "default"
	I1007 12:10:02.318235  401591 default_sa.go:55] duration metric: took 193.599365ms for default service account to be created ...
	I1007 12:10:02.318250  401591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:10:02.509774  401591 request.go:632] Waited for 191.420927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.509840  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.509853  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.509866  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.509875  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.516685  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:02.523464  401591 system_pods.go:86] 24 kube-system pods found
	I1007 12:10:02.523503  401591 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:10:02.523511  401591 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:10:02.523516  401591 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:10:02.523522  401591 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:10:02.523528  401591 system_pods.go:89] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:10:02.523534  401591 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:10:02.523539  401591 system_pods.go:89] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:10:02.523573  401591 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:10:02.523579  401591 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:10:02.523585  401591 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:10:02.523591  401591 system_pods.go:89] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:10:02.523606  401591 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:10:02.523613  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:10:02.523619  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:10:02.523628  401591 system_pods.go:89] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:10:02.523634  401591 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:10:02.523640  401591 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:10:02.523651  401591 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:10:02.523657  401591 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:10:02.523662  401591 system_pods.go:89] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:10:02.523668  401591 system_pods.go:89] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:10:02.523674  401591 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:10:02.523679  401591 system_pods.go:89] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:10:02.523685  401591 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:10:02.523697  401591 system_pods.go:126] duration metric: took 205.439551ms to wait for k8s-apps to be running ...
	I1007 12:10:02.523709  401591 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:10:02.523771  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:10:02.542038  401591 system_svc.go:56] duration metric: took 18.318301ms WaitForService to wait for kubelet
	I1007 12:10:02.542084  401591 kubeadm.go:582] duration metric: took 21.603472414s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:10:02.542109  401591 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:10:02.709771  401591 request.go:632] Waited for 167.539386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:10:02.709854  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:10:02.709863  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.709874  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.709884  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.713363  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:02.714361  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714384  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714396  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714401  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714406  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714409  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714415  401591 node_conditions.go:105] duration metric: took 172.299605ms to run NodePressure ...
	I1007 12:10:02.714430  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:10:02.714459  401591 start.go:255] writing updated cluster config ...
	I1007 12:10:02.714781  401591 ssh_runner.go:195] Run: rm -f paused
	I1007 12:10:02.769817  401591 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:10:02.771879  401591 out.go:177] * Done! kubectl is now configured to use "ha-628553" cluster and "default" namespace by default
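	
	Note (illustrative, not part of the captured log): the "node_ready" wait visible above simply re-issues GET /api/v1/nodes/ha-628553-m03 roughly every 500ms until the node reports the Ready condition, then moves on to the per-pod "pod_ready" waits. A minimal client-go sketch of that polling pattern follows; it is an assumption-based example, not minikube's implementation. The node name and poll interval are taken from the log above, while the kubeconfig path is a placeholder.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// nodeIsReady reports whether the node's Ready condition is True.
	func nodeIsReady(node *corev1.Node) bool {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Placeholder kubeconfig path; minikube itself builds its client differently.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		const nodeName = "ha-628553-m03" // node name as seen in the log above
		deadline := time.Now().Add(6 * time.Minute)
	
		for time.Now().Before(deadline) {
			// Each iteration corresponds to one GET /api/v1/nodes/<name> in the log.
			node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
			if err == nil && nodeIsReady(node) {
				fmt.Printf("node %q has status \"Ready\":\"True\"\n", nodeName)
				return
			}
			time.Sleep(500 * time.Millisecond) // roughly the interval between requests in the log
		}
		fmt.Printf("timed out waiting for node %q to be Ready\n", nodeName)
	}
	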
	
	
	==> CRI-O <==
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.566676500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6a65a77-b046-4011-9452-c2d0260b3f87 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.568192055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3609ca50-912c-40b0-808a-174db171efe5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.568622475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303235568601227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3609ca50-912c-40b0-808a-174db171efe5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.569099968Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c248ab1a-c6b4-48da-b1c1-dcc7260d64fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.569186297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c248ab1a-c6b4-48da-b1c1-dcc7260d64fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.569431459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c248ab1a-c6b4-48da-b1c1-dcc7260d64fc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.588364600Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=40147510-b15e-42fc-86fc-e845ef9d91e2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.588656927Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-vc5k8,Uid:8b53e3fe-5dba-4b37-b415-380bb77e5fd2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728303005901177008,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:10:03.789457892Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1728302864780859896,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-07T12:07:44.460023165Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-rsr6v,Uid:60fd800f-38f1-40d5-9ecf-cbf21bf5add6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302864774195120,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:07:44.454281271Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-ktmzq,Uid:fda6ae24-5407-4f63-9a56-29fa9eba8966,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1728302864754034473,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-5407-4f63-9a56-29fa9eba8966,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:07:44.445691889Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&PodSandboxMetadata{Name:kube-proxy-h6vg8,Uid:97dd82f4-8e31-4569-b762-fc804d08efb0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302852427144114,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-10-07T12:07:32.116977623Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&PodSandboxMetadata{Name:kindnet-snf5v,Uid:a6360ec2-8f69-454b-9bfc-d636ebd8b372,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302852420116260,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:07:32.108918892Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-628553,Uid:2ddceaa845e9d579fdd80284eb5bd959,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302841260971459,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2ddceaa845e9d579fdd80284eb5bd959,kubernetes.io/config.seen: 2024-10-07T12:07:20.758175345Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-628553,Uid:fa78002d344fb10ba4bceb5ed1731c87,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302841249880714,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c8
7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fa78002d344fb10ba4bceb5ed1731c87,kubernetes.io/config.seen: 2024-10-07T12:07:20.758176542Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-628553,Uid:66183128b21172d80a580f972f2b00a0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302841247830558,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{kubernetes.io/config.hash: 66183128b21172d80a580f972f2b00a0,kubernetes.io/config.seen: 2024-10-07T12:07:20.758177451Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&PodSandboxMetadata{Name:etcd-ha-628553,Uid:0cb7efd9
8e1775704789a8938bb7525f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302841245753232,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.110:2379,kubernetes.io/config.hash: 0cb7efd98e1775704789a8938bb7525f,kubernetes.io/config.seen: 2024-10-07T12:07:20.758170351Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-628553,Uid:5f9e39492eb2c4bce38dd565366b0984,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728302841225722210,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.110:8443,kubernetes.io/config.hash: 5f9e39492eb2c4bce38dd565366b0984,kubernetes.io/config.seen: 2024-10-07T12:07:20.758174139Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=40147510-b15e-42fc-86fc-e845ef9d91e2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.593379220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb0d8b7b-abcd-46e3-9b48-2135ad364f14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.593480997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb0d8b7b-abcd-46e3-9b48-2135ad364f14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.593755861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb0d8b7b-abcd-46e3-9b48-2135ad364f14 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.616045569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e637ab93-3009-4abd-b7e1-df4f65cbccda name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.616120867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e637ab93-3009-4abd-b7e1-df4f65cbccda name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.617404158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=812c6858-f858-472c-a4c2-350b89633a7d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.617914019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303235617888802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=812c6858-f858-472c-a4c2-350b89633a7d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.618507982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b132159-e096-4d94-9c59-49ae321bf08e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.618563808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b132159-e096-4d94-9c59-49ae321bf08e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.618849652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b132159-e096-4d94-9c59-49ae321bf08e name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.656044137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac8daaae-597a-4d31-a9d8-b74ab6b5e5b4 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.656118436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac8daaae-597a-4d31-a9d8-b74ab6b5e5b4 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.657347288Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0979cab3-e92c-496f-b1f9-029fc5f6f9ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.657839097Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303235657766289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0979cab3-e92c-496f-b1f9-029fc5f6f9ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.658623451Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ab17030-6e4b-4668-80e2-ddab159afb82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.658679994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ab17030-6e4b-4668-80e2-ddab159afb82 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:13:55 ha-628553 crio[670]: time="2024-10-07 12:13:55.659089554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ab17030-6e4b-4668-80e2-ddab159afb82 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cac09519e9d83       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3588af1ea926c       busybox-7dff88458-vc5k8
	914d5a55b5b7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   e4273414ae3c9       storage-provisioner
	4dcac83715ae5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   7a74be057c048       coredns-7c65d6cfc9-rsr6v
	0a438e52c0996       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   66f721a704d2d       coredns-7c65d6cfc9-ktmzq
	b10875321ed8d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   883a1bf7435de       kindnet-snf5v
	4a0b203aaca5a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   4ad2a2a2eae50       kube-proxy-h6vg8
	41e1b6a866662       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   9107fefdb6eca       kube-vip-ha-628553
	02649d86a8d5c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   e611d474900bc       etcd-ha-628553
	1a3ce3a4cad16       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   adfc5c5b9565a       kube-scheduler-ha-628553
	73e39c7d2b39b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ce8ef37c98c4f       kube-controller-manager-ha-628553
	919f5b2c17a09       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   923ba0f2be002       kube-apiserver-ha-628553
	
	
	==> coredns [0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68] <==
	[INFO] 10.244.1.2:59173 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004406792s
	[INFO] 10.244.1.2:44478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000424413s
	[INFO] 10.244.1.2:58960 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000183491s
	[INFO] 10.244.1.3:35630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000291506s
	[INFO] 10.244.1.3:42806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002399052s
	[INFO] 10.244.1.3:42397 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126644s
	[INFO] 10.244.1.3:34571 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001931949s
	[INFO] 10.244.1.3:54485 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000378487s
	[INFO] 10.244.1.3:58977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105091s
	[INFO] 10.244.0.4:38892 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002053345s
	[INFO] 10.244.0.4:58836 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172655s
	[INFO] 10.244.0.4:55251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065314s
	[INFO] 10.244.0.4:53436 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001570291s
	[INFO] 10.244.0.4:48063 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00004804s
	[INFO] 10.244.1.2:57025 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153957s
	[INFO] 10.244.1.2:40431 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012349s
	[INFO] 10.244.1.3:37153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139765s
	[INFO] 10.244.1.3:45214 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157416s
	[INFO] 10.244.1.3:47978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094264s
	[INFO] 10.244.0.4:57791 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080137s
	[INFO] 10.244.1.2:51888 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215918s
	[INFO] 10.244.1.2:42893 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166709s
	[INFO] 10.244.1.3:36056 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000172229s
	[INFO] 10.244.1.3:44744 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113708s
	[INFO] 10.244.0.4:56467 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102183s
	
	
	==> coredns [4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed] <==
	[INFO] 10.244.1.3:51613 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000585499s
	[INFO] 10.244.1.3:40629 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001993531s
	[INFO] 10.244.0.4:40285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000080316s
	[INFO] 10.244.1.2:53385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200211s
	[INFO] 10.244.1.2:46841 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028903254s
	[INFO] 10.244.1.2:36156 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000295572s
	[INFO] 10.244.1.2:46979 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159813s
	[INFO] 10.244.1.3:47839 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190478s
	[INFO] 10.244.1.3:55618 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000314649s
	[INFO] 10.244.0.4:52728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150624s
	[INFO] 10.244.0.4:42394 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090784s
	[INFO] 10.244.0.4:57656 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107027s
	[INFO] 10.244.1.2:36030 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124775s
	[INFO] 10.244.1.2:57899 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082756s
	[INFO] 10.244.1.3:44889 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195326s
	[INFO] 10.244.0.4:59043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137163s
	[INFO] 10.244.0.4:52080 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217774s
	[INFO] 10.244.0.4:40645 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102774s
	[INFO] 10.244.1.2:59521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150669s
	[INFO] 10.244.1.2:34929 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205398s
	[INFO] 10.244.1.3:50337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185196s
	[INFO] 10.244.1.3:51645 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000242498s
	[INFO] 10.244.0.4:58847 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134448s
	[INFO] 10.244.0.4:51647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147028s
	[INFO] 10.244.0.4:54351 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131375s
	
	
	==> describe nodes <==
	Name:               ha-628553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_07_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-628553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a13f7b7982a74b9eb8f82488f9c3d1a6
	  System UUID:                a13f7b79-82a7-4b9e-b8f8-2488f9c3d1a6
	  Boot ID:                    288ea8ab-36c4-4d6a-9093-1f2ac800cc46
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vc5k8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 coredns-7c65d6cfc9-ktmzq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-rsr6v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-628553                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-snf5v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-628553             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-628553    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-h6vg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-628553             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-vip-ha-628553                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m22s  kube-proxy       
	  Normal  Starting                 6m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m27s  kubelet          Node ha-628553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s  kubelet          Node ha-628553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m27s  kubelet          Node ha-628553 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  NodeReady                6m11s  kubelet          Node ha-628553 status is now: NodeReady
	  Normal  RegisteredNode           5m24s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  RegisteredNode           4m10s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	
	
	Name:               ha-628553-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_08_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:08:22 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:11:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-628553-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ba9ae7572f54f4ab8de307b6e86da52
	  System UUID:                4ba9ae75-72f5-4f4a-b8de-307b6e86da52
	  Boot ID:                    30fbb024-4877-4642-abd8-af8d3d30f079
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-75ng4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  default                     busybox-7dff88458-jhmrp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-628553-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m32s
	  kube-system                 kindnet-9rq2w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-628553-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-ha-628553-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-proxy-s5c6d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-628553-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-vip-ha-628553-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m29s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node ha-628553-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node ha-628553-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m34s)  kubelet          Node ha-628553-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  NodeNotReady             2m                     node-controller  Node ha-628553-m02 status is now: NodeNotReady
	
	
	Name:               ha-628553-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_09_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:09:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-628553-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aab92960db1b4070940c89c6ff930351
	  System UUID:                aab92960-db1b-4070-940c-89c6ff930351
	  Boot ID:                    77629bba-9229-47e7-80cf-730097c43666
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-628553-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kindnet-sb4xd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m20s
	  kube-system                 kube-apiserver-ha-628553-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-ha-628553-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-proxy-956k4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-ha-628553-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-628553-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m15s                  kube-proxy       
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m20s (x8 over 4m20s)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s (x8 over 4m20s)  kubelet          Node ha-628553-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s (x7 over 4m20s)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	
	
	Name:               ha-628553-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_10_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:10:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:11:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    ha-628553-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7e249f18a3f466abcbb6b94b02ed2ec
	  System UUID:                b7e249f1-8a3f-466a-bcbb-6b94b02ed2ec
	  Boot ID:                    dd833219-3ee8-4ed9-aae9-d441f250fa96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwk2r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m15s
	  kube-system                 kube-proxy-fkzqr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m15s (x2 over 3m15s)  kubelet          Node ha-628553-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m15s (x2 over 3m15s)  kubelet          Node ha-628553-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m15s (x2 over 3m15s)  kubelet          Node ha-628553-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  NodeReady                2m55s                  kubelet          Node ha-628553-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 12:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051409] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040490] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.878273] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.715451] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Oct 7 12:07] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.378547] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061855] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066201] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.180086] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.153013] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.284998] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.180207] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.207557] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.061569] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.415206] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.085223] kauditd_printk_skb: 79 callbacks suppressed
	[  +4.998659] kauditd_printk_skb: 26 callbacks suppressed
	[ +12.170600] kauditd_printk_skb: 33 callbacks suppressed
	[Oct 7 12:08] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969] <==
	{"level":"warn","ts":"2024-10-07T12:13:55.936756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:55.942016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:55.944829Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:55.952967Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:55.968111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:55.977260Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:55.984274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:55.988133Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:55.990865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:55.992383Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.061084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.068028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.074681Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.078847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.082491Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.088325Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.090009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.095937Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.102473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.112348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.116248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.120465Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.127533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.134242Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:13:56.190652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:13:56 up 7 min,  0 users,  load average: 0.44, 0.29, 0.15
	Linux ha-628553 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e] <==
	I1007 12:13:24.296384       1 main.go:299] handling current node
	I1007 12:13:34.285463       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:34.285588       1 main.go:299] handling current node
	I1007 12:13:34.285620       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:34.285640       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:34.285850       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:34.285880       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:34.285943       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:34.285960       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:44.285393       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:44.285467       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:44.285666       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:44.285751       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:44.285880       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:44.285904       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:44.285950       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:44.285956       1 main.go:299] handling current node
	I1007 12:13:54.294585       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:54.294702       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:54.294938       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:54.294972       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:54.295048       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:54.295074       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:54.295150       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:54.295172       1 main.go:299] handling current node
	
	
	==> kube-apiserver [919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544] <==
	I1007 12:07:27.794940       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 12:07:27.933633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 12:07:32.075355       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1007 12:07:32.486677       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1007 12:08:23.102352       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.102586       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.764µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1007 12:08:23.104149       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.105567       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.106920       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.674679ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1007 12:10:08.360356       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40292: use of closed network connection
	E1007 12:10:08.561113       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40308: use of closed network connection
	E1007 12:10:08.787138       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40330: use of closed network connection
	E1007 12:10:09.028668       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40344: use of closed network connection
	E1007 12:10:09.244263       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40368: use of closed network connection
	E1007 12:10:09.466935       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40384: use of closed network connection
	E1007 12:10:09.660058       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40410: use of closed network connection
	E1007 12:10:09.852210       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40416: use of closed network connection
	E1007 12:10:10.061165       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40432: use of closed network connection
	E1007 12:10:10.408420       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40450: use of closed network connection
	E1007 12:10:10.612165       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40466: use of closed network connection
	E1007 12:10:10.805485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40472: use of closed network connection
	E1007 12:10:10.999177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40496: use of closed network connection
	E1007 12:10:11.210763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40502: use of closed network connection
	E1007 12:10:11.463496       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40532: use of closed network connection
	W1007 12:11:36.878261       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110 192.168.39.149]
	
	
	==> kube-controller-manager [73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee] <==
	I1007 12:10:41.965922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.001526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.152486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.245459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.660674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:45.679644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:45.726419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:46.774324       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-628553-m04"
	I1007 12:10:46.775093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:46.796998       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:52.359490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:01.889908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-628553-m04"
	I1007 12:11:01.891629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:01.908947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:02.079930       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:12.784052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:56.797865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-628553-m04"
	I1007 12:11:56.798196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:11:56.825210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:11:56.976985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.040351ms"
	I1007 12:11:56.977093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.478µs"
	I1007 12:11:57.005615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.252446ms"
	I1007 12:11:57.005705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.783µs"
	I1007 12:12:00.745939       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:12:02.094451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	
	
	==> kube-proxy [4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:07:33.298365       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:07:33.336456       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E1007 12:07:33.336571       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:07:33.434284       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:07:33.434331       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:07:33.434355       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:07:33.445592       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:07:33.454423       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:07:33.454444       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:07:33.463602       1 config.go:199] "Starting service config controller"
	I1007 12:07:33.467216       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:07:33.467268       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:07:33.467274       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:07:33.472850       1 config.go:328] "Starting node config controller"
	I1007 12:07:33.472863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:07:33.568004       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:07:33.568062       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:07:33.573613       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4] <==
	E1007 12:07:26.382246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:07:26.387024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 12:07:26.387119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:07:26.410415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 12:07:26.410570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 12:07:27.604975       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 12:10:03.714499       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="38d0a2a6-0d77-403c-86e7-405837d8ca25" pod="default/busybox-7dff88458-jhmrp" assumedNode="ha-628553-m02" currentNode="ha-628553-m03"
	E1007 12:10:03.740391       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jhmrp\": pod busybox-7dff88458-jhmrp is already assigned to node \"ha-628553-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jhmrp" node="ha-628553-m03"
	E1007 12:10:03.743143       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 38d0a2a6-0d77-403c-86e7-405837d8ca25(default/busybox-7dff88458-jhmrp) was assumed on ha-628553-m03 but assigned to ha-628553-m02" pod="default/busybox-7dff88458-jhmrp"
	E1007 12:10:03.745165       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jhmrp\": pod busybox-7dff88458-jhmrp is already assigned to node \"ha-628553-m02\"" pod="default/busybox-7dff88458-jhmrp"
	I1007 12:10:03.747831       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jhmrp" node="ha-628553-m02"
	E1007 12:10:03.791061       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vc5k8\": pod busybox-7dff88458-vc5k8 is already assigned to node \"ha-628553\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vc5k8" node="ha-628553-m03"
	E1007 12:10:03.791192       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vc5k8\": pod busybox-7dff88458-vc5k8 is already assigned to node \"ha-628553\"" pod="default/busybox-7dff88458-vc5k8"
	E1007 12:10:03.910449       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-47zsz\": pod busybox-7dff88458-47zsz is already assigned to node \"ha-628553-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-47zsz" node="ha-628553-m03"
	E1007 12:10:03.910515       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 674a626e-9fe6-4875-a34f-cc4d729e2bb1(default/busybox-7dff88458-47zsz) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-47zsz"
	E1007 12:10:03.910531       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-47zsz\": pod busybox-7dff88458-47zsz is already assigned to node \"ha-628553-m03\"" pod="default/busybox-7dff88458-47zsz"
	I1007 12:10:03.910555       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-47zsz" node="ha-628553-m03"
	E1007 12:10:42.040635       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwk2r\": pod kindnet-rwk2r is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwk2r" node="ha-628553-m04"
	E1007 12:10:42.042987       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwk2r\": pod kindnet-rwk2r is already assigned to node \"ha-628553-m04\"" pod="kube-system/kindnet-rwk2r"
	E1007 12:10:42.079633       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kl4j4\": pod kindnet-kl4j4 is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kl4j4" node="ha-628553-m04"
	E1007 12:10:42.079724       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 244c4da8-46b7-4627-a7ad-60e7ff405b0a(kube-system/kindnet-kl4j4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kl4j4"
	E1007 12:10:42.079846       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kl4j4\": pod kindnet-kl4j4 is already assigned to node \"ha-628553-m04\"" pod="kube-system/kindnet-kl4j4"
	I1007 12:10:42.079871       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kl4j4" node="ha-628553-m04"
	E1007 12:10:42.086167       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-g2fwp\": pod kube-proxy-g2fwp is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-g2fwp" node="ha-628553-m04"
	E1007 12:10:42.086272       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-g2fwp\": pod kube-proxy-g2fwp is already assigned to node \"ha-628553-m04\"" pod="kube-system/kube-proxy-g2fwp"
	
	
	==> kubelet <==
	Oct 07 12:12:27 ha-628553 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:12:27 ha-628553 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:12:28 ha-628553 kubelet[1314]: E1007 12:12:28.044744    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303148044534034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:28 ha-628553 kubelet[1314]: E1007 12:12:28.044838    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303148044534034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:38 ha-628553 kubelet[1314]: E1007 12:12:38.050523    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303158047005260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:38 ha-628553 kubelet[1314]: E1007 12:12:38.051561    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303158047005260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:48 ha-628553 kubelet[1314]: E1007 12:12:48.053900    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303168053449361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:48 ha-628553 kubelet[1314]: E1007 12:12:48.053963    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303168053449361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:58 ha-628553 kubelet[1314]: E1007 12:12:58.055856    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303178055537621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:58 ha-628553 kubelet[1314]: E1007 12:12:58.055895    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303178055537621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:08 ha-628553 kubelet[1314]: E1007 12:13:08.057102    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303188056723208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:08 ha-628553 kubelet[1314]: E1007 12:13:08.057351    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303188056723208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:18 ha-628553 kubelet[1314]: E1007 12:13:18.061478    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303198060609364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:18 ha-628553 kubelet[1314]: E1007 12:13:18.061853    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303198060609364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:27 ha-628553 kubelet[1314]: E1007 12:13:27.990111    1314 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:13:27 ha-628553 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:13:28 ha-628553 kubelet[1314]: E1007 12:13:28.063998    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303208063333958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:28 ha-628553 kubelet[1314]: E1007 12:13:28.064098    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303208063333958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:38 ha-628553 kubelet[1314]: E1007 12:13:38.066580    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303218065435839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:38 ha-628553 kubelet[1314]: E1007 12:13:38.066632    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303218065435839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:48 ha-628553 kubelet[1314]: E1007 12:13:48.067728    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303228067468647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:48 ha-628553 kubelet[1314]: E1007 12:13:48.067868    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303228067468647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-628553 -n ha-628553
helpers_test.go:261: (dbg) Run:  kubectl --context ha-628553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.38s)
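Note: the post-mortem query at helpers_test.go:261 above lists every pod, across all namespaces, whose phase is not Running. A minimal stand-alone Go sketch of that same kubectl query follows; it is illustrative only and assumes kubectl and the ha-628553 kube context are available on the machine running it (the test harness provides both, a reader's environment may not).

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same field selector the post-mortem helper uses: every pod, in all
	// namespaces, whose phase is anything other than Running.
	out, err := exec.Command("kubectl", "--context", "ha-628553", "get", "po",
		"-A", "--field-selector=status.phase!=Running",
		"-o=jsonpath={.items[*].metadata.name}").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl failed: %v\n%s", err, out)
	}
	fmt.Printf("non-Running pods: %q\n", string(out))
}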

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.796455562s)
ha_test.go:309: expected profile "ha-628553" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-628553\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-628553\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-628553\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.110\",\"Port\":8443,\"Kubern
etesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.169\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.149\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.119\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"
metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\"
:262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-628553 -n ha-628553
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-628553 logs -n 25: (1.464913085s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553:/home/docker/cp-test_ha-628553-m03_ha-628553.txt                       |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553 sudo cat                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553.txt                                 |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m04 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp testdata/cp-test.txt                                                | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553:/home/docker/cp-test_ha-628553-m04_ha-628553.txt                       |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553 sudo cat                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553.txt                                 |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03:/home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m03 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-628553 node stop m02 -v=7                                                     | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-628553 node start m02 -v=7                                                    | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:06:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:06:46.248953  401591 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:06:46.249102  401591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:46.249113  401591 out.go:358] Setting ErrFile to fd 2...
	I1007 12:06:46.249117  401591 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:06:46.249326  401591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:06:46.249966  401591 out.go:352] Setting JSON to false
	I1007 12:06:46.250938  401591 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6552,"bootTime":1728296254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:06:46.251073  401591 start.go:139] virtualization: kvm guest
	I1007 12:06:46.253469  401591 out.go:177] * [ha-628553] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:06:46.255142  401591 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:06:46.255180  401591 notify.go:220] Checking for updates...
	I1007 12:06:46.257412  401591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:06:46.258630  401591 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:06:46.259784  401591 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.261129  401591 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:06:46.262379  401591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:06:46.263655  401591 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:06:46.300943  401591 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 12:06:46.302472  401591 start.go:297] selected driver: kvm2
	I1007 12:06:46.302493  401591 start.go:901] validating driver "kvm2" against <nil>
	I1007 12:06:46.302513  401591 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:06:46.303566  401591 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:06:46.303697  401591 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:06:46.319358  401591 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:06:46.319408  401591 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:06:46.319656  401591 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:06:46.319692  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:06:46.319741  401591 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 12:06:46.319766  401591 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:06:46.319825  401591 start.go:340] cluster config:
	{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1007 12:06:46.319936  401591 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:06:46.321805  401591 out.go:177] * Starting "ha-628553" primary control-plane node in "ha-628553" cluster
	I1007 12:06:46.323163  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:06:46.323208  401591 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:06:46.323219  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:06:46.323305  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:06:46.323316  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:06:46.323679  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:06:46.323704  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json: {Name:mk2a07965de558fa93dada604e58b87e56b9c04c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:06:46.323847  401591 start.go:360] acquireMachinesLock for ha-628553: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:06:46.323875  401591 start.go:364] duration metric: took 15.967µs to acquireMachinesLock for "ha-628553"
	I1007 12:06:46.323891  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:06:46.323965  401591 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 12:06:46.325764  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:06:46.325922  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:06:46.325971  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:06:46.341278  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I1007 12:06:46.341788  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:06:46.342304  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:06:46.342327  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:06:46.342728  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:06:46.342902  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:06:46.343093  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:06:46.343232  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:06:46.343262  401591 client.go:168] LocalClient.Create starting
	I1007 12:06:46.343300  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:06:46.343339  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:06:46.343361  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:06:46.343431  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:06:46.343449  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:06:46.343461  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:06:46.343477  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:06:46.343525  401591 main.go:141] libmachine: (ha-628553) Calling .PreCreateCheck
	I1007 12:06:46.343857  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:06:46.344200  401591 main.go:141] libmachine: Creating machine...
	I1007 12:06:46.344213  401591 main.go:141] libmachine: (ha-628553) Calling .Create
	I1007 12:06:46.344334  401591 main.go:141] libmachine: (ha-628553) Creating KVM machine...
	I1007 12:06:46.345527  401591 main.go:141] libmachine: (ha-628553) DBG | found existing default KVM network
	I1007 12:06:46.346242  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.346122  401614 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015bb0}
	I1007 12:06:46.346346  401591 main.go:141] libmachine: (ha-628553) DBG | created network xml: 
	I1007 12:06:46.346370  401591 main.go:141] libmachine: (ha-628553) DBG | <network>
	I1007 12:06:46.346380  401591 main.go:141] libmachine: (ha-628553) DBG |   <name>mk-ha-628553</name>
	I1007 12:06:46.346391  401591 main.go:141] libmachine: (ha-628553) DBG |   <dns enable='no'/>
	I1007 12:06:46.346402  401591 main.go:141] libmachine: (ha-628553) DBG |   
	I1007 12:06:46.346407  401591 main.go:141] libmachine: (ha-628553) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1007 12:06:46.346415  401591 main.go:141] libmachine: (ha-628553) DBG |     <dhcp>
	I1007 12:06:46.346420  401591 main.go:141] libmachine: (ha-628553) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1007 12:06:46.346428  401591 main.go:141] libmachine: (ha-628553) DBG |     </dhcp>
	I1007 12:06:46.346439  401591 main.go:141] libmachine: (ha-628553) DBG |   </ip>
	I1007 12:06:46.346452  401591 main.go:141] libmachine: (ha-628553) DBG |   
	I1007 12:06:46.346459  401591 main.go:141] libmachine: (ha-628553) DBG | </network>
	I1007 12:06:46.346484  401591 main.go:141] libmachine: (ha-628553) DBG | 
	I1007 12:06:46.351921  401591 main.go:141] libmachine: (ha-628553) DBG | trying to create private KVM network mk-ha-628553 192.168.39.0/24...
	I1007 12:06:46.427414  401591 main.go:141] libmachine: (ha-628553) DBG | private KVM network mk-ha-628553 192.168.39.0/24 created
	I1007 12:06:46.427467  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.427375  401614 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.427482  401591 main.go:141] libmachine: (ha-628553) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 ...
	I1007 12:06:46.427511  401591 main.go:141] libmachine: (ha-628553) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:06:46.427534  401591 main.go:141] libmachine: (ha-628553) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:06:46.734984  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.734782  401614 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa...
	I1007 12:06:46.872452  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.872289  401614 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/ha-628553.rawdisk...
	I1007 12:06:46.872482  401591 main.go:141] libmachine: (ha-628553) DBG | Writing magic tar header
	I1007 12:06:46.872494  401591 main.go:141] libmachine: (ha-628553) DBG | Writing SSH key tar header
	I1007 12:06:46.872500  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:46.872414  401614 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 ...
	I1007 12:06:46.872528  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553
	I1007 12:06:46.872550  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553 (perms=drwx------)
	I1007 12:06:46.872558  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:06:46.872571  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:06:46.872585  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:06:46.872599  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:06:46.872642  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:06:46.872667  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:06:46.872679  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:06:46.872704  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:06:46.872718  401591 main.go:141] libmachine: (ha-628553) DBG | Checking permissions on dir: /home
	I1007 12:06:46.872731  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:06:46.872746  401591 main.go:141] libmachine: (ha-628553) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:06:46.872756  401591 main.go:141] libmachine: (ha-628553) Creating domain...
	I1007 12:06:46.872770  401591 main.go:141] libmachine: (ha-628553) DBG | Skipping /home - not owner
	I1007 12:06:46.873981  401591 main.go:141] libmachine: (ha-628553) define libvirt domain using xml: 
	I1007 12:06:46.874013  401591 main.go:141] libmachine: (ha-628553) <domain type='kvm'>
	I1007 12:06:46.874020  401591 main.go:141] libmachine: (ha-628553)   <name>ha-628553</name>
	I1007 12:06:46.874024  401591 main.go:141] libmachine: (ha-628553)   <memory unit='MiB'>2200</memory>
	I1007 12:06:46.874029  401591 main.go:141] libmachine: (ha-628553)   <vcpu>2</vcpu>
	I1007 12:06:46.874033  401591 main.go:141] libmachine: (ha-628553)   <features>
	I1007 12:06:46.874038  401591 main.go:141] libmachine: (ha-628553)     <acpi/>
	I1007 12:06:46.874041  401591 main.go:141] libmachine: (ha-628553)     <apic/>
	I1007 12:06:46.874076  401591 main.go:141] libmachine: (ha-628553)     <pae/>
	I1007 12:06:46.874106  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874128  401591 main.go:141] libmachine: (ha-628553)   </features>
	I1007 12:06:46.874148  401591 main.go:141] libmachine: (ha-628553)   <cpu mode='host-passthrough'>
	I1007 12:06:46.874160  401591 main.go:141] libmachine: (ha-628553)   
	I1007 12:06:46.874169  401591 main.go:141] libmachine: (ha-628553)   </cpu>
	I1007 12:06:46.874177  401591 main.go:141] libmachine: (ha-628553)   <os>
	I1007 12:06:46.874184  401591 main.go:141] libmachine: (ha-628553)     <type>hvm</type>
	I1007 12:06:46.874189  401591 main.go:141] libmachine: (ha-628553)     <boot dev='cdrom'/>
	I1007 12:06:46.874195  401591 main.go:141] libmachine: (ha-628553)     <boot dev='hd'/>
	I1007 12:06:46.874201  401591 main.go:141] libmachine: (ha-628553)     <bootmenu enable='no'/>
	I1007 12:06:46.874209  401591 main.go:141] libmachine: (ha-628553)   </os>
	I1007 12:06:46.874217  401591 main.go:141] libmachine: (ha-628553)   <devices>
	I1007 12:06:46.874227  401591 main.go:141] libmachine: (ha-628553)     <disk type='file' device='cdrom'>
	I1007 12:06:46.874240  401591 main.go:141] libmachine: (ha-628553)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/boot2docker.iso'/>
	I1007 12:06:46.874254  401591 main.go:141] libmachine: (ha-628553)       <target dev='hdc' bus='scsi'/>
	I1007 12:06:46.874286  401591 main.go:141] libmachine: (ha-628553)       <readonly/>
	I1007 12:06:46.874302  401591 main.go:141] libmachine: (ha-628553)     </disk>
	I1007 12:06:46.874308  401591 main.go:141] libmachine: (ha-628553)     <disk type='file' device='disk'>
	I1007 12:06:46.874314  401591 main.go:141] libmachine: (ha-628553)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:06:46.874328  401591 main.go:141] libmachine: (ha-628553)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/ha-628553.rawdisk'/>
	I1007 12:06:46.874335  401591 main.go:141] libmachine: (ha-628553)       <target dev='hda' bus='virtio'/>
	I1007 12:06:46.874340  401591 main.go:141] libmachine: (ha-628553)     </disk>
	I1007 12:06:46.874346  401591 main.go:141] libmachine: (ha-628553)     <interface type='network'>
	I1007 12:06:46.874352  401591 main.go:141] libmachine: (ha-628553)       <source network='mk-ha-628553'/>
	I1007 12:06:46.874358  401591 main.go:141] libmachine: (ha-628553)       <model type='virtio'/>
	I1007 12:06:46.874363  401591 main.go:141] libmachine: (ha-628553)     </interface>
	I1007 12:06:46.874369  401591 main.go:141] libmachine: (ha-628553)     <interface type='network'>
	I1007 12:06:46.874375  401591 main.go:141] libmachine: (ha-628553)       <source network='default'/>
	I1007 12:06:46.874381  401591 main.go:141] libmachine: (ha-628553)       <model type='virtio'/>
	I1007 12:06:46.874386  401591 main.go:141] libmachine: (ha-628553)     </interface>
	I1007 12:06:46.874395  401591 main.go:141] libmachine: (ha-628553)     <serial type='pty'>
	I1007 12:06:46.874400  401591 main.go:141] libmachine: (ha-628553)       <target port='0'/>
	I1007 12:06:46.874409  401591 main.go:141] libmachine: (ha-628553)     </serial>
	I1007 12:06:46.874429  401591 main.go:141] libmachine: (ha-628553)     <console type='pty'>
	I1007 12:06:46.874446  401591 main.go:141] libmachine: (ha-628553)       <target type='serial' port='0'/>
	I1007 12:06:46.874474  401591 main.go:141] libmachine: (ha-628553)     </console>
	I1007 12:06:46.874484  401591 main.go:141] libmachine: (ha-628553)     <rng model='virtio'>
	I1007 12:06:46.874505  401591 main.go:141] libmachine: (ha-628553)       <backend model='random'>/dev/random</backend>
	I1007 12:06:46.874515  401591 main.go:141] libmachine: (ha-628553)     </rng>
	I1007 12:06:46.874526  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874539  401591 main.go:141] libmachine: (ha-628553)     
	I1007 12:06:46.874559  401591 main.go:141] libmachine: (ha-628553)   </devices>
	I1007 12:06:46.874569  401591 main.go:141] libmachine: (ha-628553) </domain>
	I1007 12:06:46.874620  401591 main.go:141] libmachine: (ha-628553) 
	I1007 12:06:46.879724  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:6a:a7:e1 in network default
	I1007 12:06:46.880361  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:46.880382  401591 main.go:141] libmachine: (ha-628553) Ensuring networks are active...
	I1007 12:06:46.881257  401591 main.go:141] libmachine: (ha-628553) Ensuring network default is active
	I1007 12:06:46.881675  401591 main.go:141] libmachine: (ha-628553) Ensuring network mk-ha-628553 is active
	I1007 12:06:46.882336  401591 main.go:141] libmachine: (ha-628553) Getting domain xml...
	I1007 12:06:46.883247  401591 main.go:141] libmachine: (ha-628553) Creating domain...
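The two steps above ("define libvirt domain using xml" followed by "Creating domain...") are the usual libvirt flow: register a persistent domain from the generated XML, then boot it. A minimal sketch with the libvirt Go bindings, assuming the libvirt.org/go/libvirt import path and a local qemu:///system connection; the names here are illustrative, not the kvm2 driver's actual code:

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt" // assumed import path for the Go bindings
    )

    // defineAndStart registers a persistent domain from domainXML and boots it.
    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()
        return dom.Create() // start the freshly defined domain
    }

    func main() {
        if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
            log.Fatal(err)
        }
    }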
	I1007 12:06:48.123283  401591 main.go:141] libmachine: (ha-628553) Waiting to get IP...
	I1007 12:06:48.124056  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.124511  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.124563  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.124510  401614 retry.go:31] will retry after 252.804778ms: waiting for machine to come up
	I1007 12:06:48.379035  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.379469  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.379489  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.379438  401614 retry.go:31] will retry after 356.807953ms: waiting for machine to come up
	I1007 12:06:48.738267  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:48.738722  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:48.738745  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:48.738688  401614 retry.go:31] will retry after 447.95167ms: waiting for machine to come up
	I1007 12:06:49.188519  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:49.188950  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:49.189019  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:49.188950  401614 retry.go:31] will retry after 486.200273ms: waiting for machine to come up
	I1007 12:06:49.676646  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:49.677063  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:49.677096  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:49.677017  401614 retry.go:31] will retry after 751.80427ms: waiting for machine to come up
	I1007 12:06:50.430789  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:50.431237  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:50.431260  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:50.431198  401614 retry.go:31] will retry after 897.786106ms: waiting for machine to come up
	I1007 12:06:51.330467  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:51.330831  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:51.330901  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:51.330836  401614 retry.go:31] will retry after 793.545437ms: waiting for machine to come up
	I1007 12:06:52.125725  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:52.126243  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:52.126280  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:52.126156  401614 retry.go:31] will retry after 986.036634ms: waiting for machine to come up
	I1007 12:06:53.113559  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:53.113953  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:53.113997  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:53.113901  401614 retry.go:31] will retry after 1.340335374s: waiting for machine to come up
	I1007 12:06:54.456245  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:54.456708  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:54.456732  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:54.456674  401614 retry.go:31] will retry after 1.447575739s: waiting for machine to come up
	I1007 12:06:55.906303  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:55.906806  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:55.906840  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:55.906747  401614 retry.go:31] will retry after 2.291446715s: waiting for machine to come up
	I1007 12:06:58.200323  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:06:58.200867  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:06:58.200896  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:06:58.200813  401614 retry.go:31] will retry after 2.450660794s: waiting for machine to come up
	I1007 12:07:00.654450  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:00.655019  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:07:00.655050  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:07:00.654943  401614 retry.go:31] will retry after 4.454613315s: waiting for machine to come up
	I1007 12:07:05.114240  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:05.114649  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:07:05.114678  401591 main.go:141] libmachine: (ha-628553) DBG | I1007 12:07:05.114610  401614 retry.go:31] will retry after 4.13354174s: waiting for machine to come up
	I1007 12:07:09.251818  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.252270  401591 main.go:141] libmachine: (ha-628553) Found IP for machine: 192.168.39.110
	I1007 12:07:09.252297  401591 main.go:141] libmachine: (ha-628553) Reserving static IP address...
	I1007 12:07:09.252306  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has current primary IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.252723  401591 main.go:141] libmachine: (ha-628553) DBG | unable to find host DHCP lease matching {name: "ha-628553", mac: "52:54:00:7b:12:fd", ip: "192.168.39.110"} in network mk-ha-628553
	I1007 12:07:09.328075  401591 main.go:141] libmachine: (ha-628553) DBG | Getting to WaitForSSH function...
	I1007 12:07:09.328108  401591 main.go:141] libmachine: (ha-628553) Reserved static IP address: 192.168.39.110
	I1007 12:07:09.328119  401591 main.go:141] libmachine: (ha-628553) Waiting for SSH to be available...
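The run of "unable to find current IP address ... will retry after ..." messages that ends in "Found IP for machine" above is a backoff poll of the network's DHCP leases until the new MAC shows up with an address. A rough, self-contained sketch of that wait loop; lookup is a hypothetical stand-in for reading the mk-ha-628553 lease table:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it reports an address or the deadline passes,
    // growing the delay between attempts roughly like the 252ms..4.4s steps in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(backoff)
            if backoff < 5*time.Second {
                backoff += backoff / 2
            }
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 3 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.110", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }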
	I1007 12:07:09.330775  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.331429  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.331468  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.331645  401591 main.go:141] libmachine: (ha-628553) DBG | Using SSH client type: external
	I1007 12:07:09.331670  401591 main.go:141] libmachine: (ha-628553) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa (-rw-------)
	I1007 12:07:09.331710  401591 main.go:141] libmachine: (ha-628553) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:07:09.331724  401591 main.go:141] libmachine: (ha-628553) DBG | About to run SSH command:
	I1007 12:07:09.331736  401591 main.go:141] libmachine: (ha-628553) DBG | exit 0
	I1007 12:07:09.455242  401591 main.go:141] libmachine: (ha-628553) DBG | SSH cmd err, output: <nil>: 
	I1007 12:07:09.455632  401591 main.go:141] libmachine: (ha-628553) KVM machine creation complete!
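The probe that precedes "KVM machine creation complete!" is an external ssh invocation running exit 0 with the hardened options listed in the log; the machine counts as reachable once that command returns success. A hedged sketch of the same check using os/exec (option list copied from the log, helper name and key path placeholder are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshReachable runs `ssh <options> user@host exit 0` and reports whether it succeeded.
    func sshReachable(host, user, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, host),
            "exit", "0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        ok := sshReachable("192.168.39.110", "docker", "/path/to/machines/ha-628553/id_rsa")
        fmt.Println("ssh reachable:", ok)
    }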
	I1007 12:07:09.455937  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:07:09.456561  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:09.456802  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:09.457023  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:07:09.457043  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:09.458370  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:07:09.458386  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:07:09.458404  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:07:09.458413  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.460807  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.461171  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.461207  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.461300  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.461468  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.461645  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.461780  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.461919  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.462158  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.462173  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:07:09.562645  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:09.562687  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:07:09.562725  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.565649  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.565971  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.566008  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.566176  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.566388  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.566561  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.566676  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.566830  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.567082  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.567099  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:07:09.667847  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:07:09.667941  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:07:09.667948  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:07:09.667957  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.668229  401591 buildroot.go:166] provisioning hostname "ha-628553"
	I1007 12:07:09.668263  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.668471  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.671034  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.671389  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.671427  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.671579  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.671743  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.671923  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.672060  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.672217  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.672404  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.672417  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553 && echo "ha-628553" | sudo tee /etc/hostname
	I1007 12:07:09.786631  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553
	
	I1007 12:07:09.786665  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.789427  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.789744  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.789774  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.789989  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.790273  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.790426  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.790549  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.790707  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:09.790919  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:09.790942  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:07:09.900194  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:09.900232  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:07:09.900296  401591 buildroot.go:174] setting up certificates
	I1007 12:07:09.900321  401591 provision.go:84] configureAuth start
	I1007 12:07:09.900343  401591 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:07:09.900684  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:09.903579  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.904022  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.904048  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.904222  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.906311  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.906630  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.906658  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.906830  401591 provision.go:143] copyHostCerts
	I1007 12:07:09.906874  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:09.906920  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:07:09.906937  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:09.907109  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:07:09.907203  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:09.907224  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:07:09.907232  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:09.907258  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:07:09.907319  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:09.907341  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:07:09.907348  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:09.907368  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:07:09.907427  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553 san=[127.0.0.1 192.168.39.110 ha-628553 localhost minikube]
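The "generating server cert" line above issues a serving certificate whose subject alternative names are exactly the san=[...] list shown, signed by the cluster CA from the machine store. A minimal crypto/x509 sketch of that step; the CA is created on the fly here instead of being loaded from ca.pem/ca-key.pem, so treat it as an illustration rather than minikube's implementation:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Stand-in CA; minikube reuses the ca.pem/ca-key.pem from its machine store.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs reported in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-628553"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.110")},
            DNSNames:     []string{"ha-628553", "localhost", "minikube"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("issued server cert, %d bytes DER", len(srvDER))
    }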
	I1007 12:07:09.982701  401591 provision.go:177] copyRemoteCerts
	I1007 12:07:09.982771  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:07:09.982796  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:09.985547  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.985859  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:09.985888  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:09.986044  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:09.986244  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:09.986399  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:09.986506  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.070065  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:07:10.070156  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:07:10.096714  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:07:10.096790  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1007 12:07:10.123505  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:07:10.123591  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:07:10.149487  401591 provision.go:87] duration metric: took 249.146606ms to configureAuth
	I1007 12:07:10.149524  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:07:10.149723  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:10.149836  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.152585  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.152880  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.152910  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.153069  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.153241  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.153400  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.153553  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.153691  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:10.153888  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:10.153903  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:07:10.373356  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:07:10.373398  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:07:10.373429  401591 main.go:141] libmachine: (ha-628553) Calling .GetURL
	I1007 12:07:10.374673  401591 main.go:141] libmachine: (ha-628553) DBG | Using libvirt version 6000000
	I1007 12:07:10.376989  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.377347  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.377371  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.377519  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:07:10.377531  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:07:10.377548  401591 client.go:171] duration metric: took 24.034266127s to LocalClient.Create
	I1007 12:07:10.377571  401591 start.go:167] duration metric: took 24.034341329s to libmachine.API.Create "ha-628553"
	I1007 12:07:10.377581  401591 start.go:293] postStartSetup for "ha-628553" (driver="kvm2")
	I1007 12:07:10.377593  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:07:10.377610  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.377871  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:07:10.377899  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.380000  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.380320  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.380343  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.380475  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.380648  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.380799  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.380960  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.461919  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:07:10.466913  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:07:10.466951  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:07:10.467055  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:07:10.467179  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:07:10.467195  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:07:10.467315  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:07:10.478269  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:10.503960  401591 start.go:296] duration metric: took 126.358927ms for postStartSetup
	I1007 12:07:10.504030  401591 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:07:10.504699  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:10.507315  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.507612  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.507660  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.507956  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:10.508187  401591 start.go:128] duration metric: took 24.184210305s to createHost
	I1007 12:07:10.508226  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.510480  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.510789  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.510822  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.511033  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.511256  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.511415  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.511573  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.511733  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:10.511905  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:07:10.511924  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:07:10.611827  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302830.585700119
	
	I1007 12:07:10.611860  401591 fix.go:216] guest clock: 1728302830.585700119
	I1007 12:07:10.611870  401591 fix.go:229] Guest: 2024-10-07 12:07:10.585700119 +0000 UTC Remote: 2024-10-07 12:07:10.508202357 +0000 UTC m=+24.300236101 (delta=77.497762ms)
	I1007 12:07:10.611911  401591 fix.go:200] guest clock delta is within tolerance: 77.497762ms
	I1007 12:07:10.611917  401591 start.go:83] releasing machines lock for "ha-628553", held for 24.288033555s
	I1007 12:07:10.611944  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.612216  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:10.614566  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.614868  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.614895  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.615083  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.615721  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.615950  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:10.616059  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:07:10.616101  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.616157  401591 ssh_runner.go:195] Run: cat /version.json
	I1007 12:07:10.616184  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:10.618780  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.618978  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619174  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.619193  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619348  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:10.619390  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:10.619659  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.619672  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:10.619840  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.619847  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:10.620016  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.620024  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:10.620177  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.620181  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:10.718502  401591 ssh_runner.go:195] Run: systemctl --version
	I1007 12:07:10.724799  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:07:10.886272  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:07:10.893483  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:07:10.893578  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:07:10.909850  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:07:10.909880  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:07:10.909961  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:07:10.926247  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:07:10.941251  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:07:10.941339  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:07:10.955771  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:07:10.969831  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:07:11.084350  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:07:11.233191  401591 docker.go:233] disabling docker service ...
	I1007 12:07:11.233261  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:07:11.257607  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:07:11.272121  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:07:11.404315  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:07:11.544026  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:07:11.559395  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:07:11.580516  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:07:11.580580  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.592830  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:07:11.592905  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.604197  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.615375  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.626652  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:07:11.638161  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.649289  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.668010  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:11.679654  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:07:11.690371  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:07:11.690448  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:07:11.704718  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:07:11.715762  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:11.825411  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
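The block of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then reloads systemd and restarts crio. A condensed sketch that replays the core edits over a plain ssh invocation; commands are copied from the log, while the runner and key path are stand-ins for minikube's ssh_runner, not its real code:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // runRemote executes one shell command on the guest over ssh.
    func runRemote(host, user, key, cmd string) error {
        out, err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no",
            fmt.Sprintf("%s@%s", user, host), cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
        }
        return nil
    }

    func main() {
        cmds := []string{
            // The same edits the log applies to CRI-O's drop-in config.
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart crio`,
        }
        for _, c := range cmds {
            if err := runRemote("192.168.39.110", "docker", "/path/to/id_rsa", c); err != nil {
                log.Fatal(err)
            }
        }
    }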
	I1007 12:07:11.918378  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:07:11.918470  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:07:11.923527  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:07:11.923612  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:07:11.927764  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:07:11.977811  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:07:11.977922  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:07:12.007918  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:07:12.039043  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:07:12.040655  401591 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:07:12.043258  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:12.043618  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:12.043660  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:12.043867  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:07:12.048464  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:07:12.062293  401591 kubeadm.go:883] updating cluster {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:07:12.062486  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:07:12.062597  401591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:07:12.097470  401591 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:07:12.097555  401591 ssh_runner.go:195] Run: which lz4
	I1007 12:07:12.101992  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 12:07:12.102107  401591 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:07:12.106769  401591 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:07:12.106815  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:07:13.549777  401591 crio.go:462] duration metric: took 1.447693523s to copy over tarball
	I1007 12:07:13.549867  401591 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:07:15.620966  401591 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.071058726s)
	I1007 12:07:15.621003  401591 crio.go:469] duration metric: took 2.071194203s to extract the tarball
	I1007 12:07:15.621015  401591 ssh_runner.go:146] rm: /preloaded.tar.lz4
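Because the earlier crictl check reported no preloaded images, the tarball preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 is copied to the guest, unpacked under /var, then deleted. A sketch of the same check-then-extract flow as it would run on the guest (paths and tar flags from the log; assumes tar and lz4 are available, as on the Buildroot image):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"

        // Confirm the tarball is present before extracting
        // (the log's stat check decides whether it still needs to be scp'd over).
        if _, err := os.Stat(tarball); err != nil {
            log.Fatalf("preload tarball missing: %v", err)
        }

        // Same extraction as in the log: lz4-decompress into /var, preserving xattrs.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("extract failed: %v", err)
        }

        // Drop the tarball afterwards to reclaim the ~370MB it occupies.
        if err := exec.Command("sudo", "rm", "-f", tarball).Run(); err != nil {
            log.Printf("cleanup: %v", err)
        }
    }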
	I1007 12:07:15.659036  401591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:07:15.704438  401591 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:07:15.704468  401591 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:07:15.704477  401591 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.31.1 crio true true} ...
	I1007 12:07:15.704607  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:07:15.704694  401591 ssh_runner.go:195] Run: crio config
	I1007 12:07:15.754734  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:07:15.754757  401591 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:07:15.754770  401591 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:07:15.754796  401591 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-628553 NodeName:ha-628553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:07:15.754985  401591 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-628553"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
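	The InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration dumped above are rendered by filling a template with the values from the kubeadm options struct logged at kubeadm.go:181. A minimal, self-contained sketch of that rendering step, using a hypothetical cut-down template (not minikube's actual one) and only the InitConfiguration fields:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Values mirrored from the kubeadm options struct logged above.
    type initOpts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	CRISocket        string
    	NodeName         string
    	NodeIP           string
    }

    // Hypothetical cut-down template; minikube's real template also emits the
    // ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration docs.
    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
    	opts := initOpts{
    		AdvertiseAddress: "192.168.39.110",
    		APIServerPort:    8443,
    		CRISocket:        "unix:///var/run/crio/crio.sock",
    		NodeName:         "ha-628553",
    		NodeIP:           "192.168.39.110",
    	}
    	t := template.Must(template.New("init").Parse(initTmpl))
    	if err := t.Execute(os.Stdout, opts); err != nil {
    		panic(err)
    	}
    }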
	
	I1007 12:07:15.755023  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:07:15.755081  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:07:15.772386  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:07:15.772511  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
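	The static pod manifest above pins the HA VIP 192.168.39.254 onto eth0 via ARP announcements and leader election, with control-plane load balancing on port 8443. A standalone sketch (not minikube code) of the sanity check this implies, namely that the VIP sits in the same /24 as the primary node IP 192.168.39.110 so gratuitous ARP can take effect:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// VIP and node subnet taken from the log; the /24 is assumed from the
    	// DHCP lease shown later (Prefix:24).
    	vip, err := netip.ParseAddr("192.168.39.254")
    	if err != nil {
    		panic(err)
    	}
    	subnet, err := netip.ParsePrefix("192.168.39.0/24")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("VIP inside node subnet:", subnet.Contains(vip))
    }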
	I1007 12:07:15.772569  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:07:15.783117  401591 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:07:15.783206  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:07:15.793430  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:07:15.811520  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:07:15.829402  401591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:07:15.846802  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1007 12:07:15.864215  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:07:15.868441  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
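	The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the HA VIP. The same logic in a standalone sketch, operating on an assumed in-memory copy of the file rather than over SSH:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Assumed starting contents; the real run edits /etc/hosts on the VM.
    	hosts := "127.0.0.1\tlocalhost\n127.0.1.1\tha-628553\n192.168.39.1\tcontrol-plane.minikube.internal\n"

    	const name = "control-plane.minikube.internal"
    	const vip = "192.168.39.254"

    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		// Drop any stale mapping for the control-plane name, as the grep -v does.
    		if strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	// Re-append the name pointing at the HA VIP, as the echo does.
    	kept = append(kept, vip+"\t"+name)
    	fmt.Println(strings.Join(kept, "\n"))
    }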
	I1007 12:07:15.881667  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:16.004989  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:07:16.023767  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.110
	I1007 12:07:16.023798  401591 certs.go:194] generating shared ca certs ...
	I1007 12:07:16.023817  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.023995  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:07:16.024043  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:07:16.024055  401591 certs.go:256] generating profile certs ...
	I1007 12:07:16.024128  401591 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:07:16.024144  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt with IP's: []
	I1007 12:07:16.480073  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt ...
	I1007 12:07:16.480107  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt: {Name:mkfb027cfd899ceeb19712c80d47ef46bbe4c190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.480291  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key ...
	I1007 12:07:16.480303  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key: {Name:mk472c4daf268a3e203f7108e0ee108260fa3747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.480379  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105
	I1007 12:07:16.480394  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.254]
	I1007 12:07:16.560831  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 ...
	I1007 12:07:16.560865  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105: {Name:mkda56599207690099e4c299c085dc0644ef658a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.561026  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105 ...
	I1007 12:07:16.561038  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105: {Name:mk95b3f2a966eb67f31cfddf5b506b130fe9bd62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.561111  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.4812d105 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:07:16.561219  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.4812d105 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:07:16.561278  401591 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:07:16.561293  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt with IP's: []
	I1007 12:07:16.724627  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt ...
	I1007 12:07:16.724663  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt: {Name:mka4b333091a10b550ae6d13ed243d08adf6256b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:16.724831  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key ...
	I1007 12:07:16.724852  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key: {Name:mk6b2bcdf33ba7c4b6b9286fdc19a9d76a966caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
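	The client, apiserver and aggregator profile certs above are minted programmatically rather than with openssl. A self-contained sketch using Go's crypto/x509, self-signed for brevity (the real certs are signed by the minikubeCA key), that issues a certificate carrying the same IP SANs listed for the apiserver cert:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs copied from the apiserver cert generation logged above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.110"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }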
	I1007 12:07:16.724932  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:07:16.724949  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:07:16.724963  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:07:16.724977  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:07:16.724990  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:07:16.725004  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:07:16.725016  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:07:16.725028  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:07:16.725075  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:07:16.725108  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:07:16.725118  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:07:16.725153  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:07:16.725179  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:07:16.725216  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:07:16.725253  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:16.725329  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:07:16.725350  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:07:16.725362  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:16.726018  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:07:16.753427  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:07:16.781404  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:07:16.817294  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:07:16.847559  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:07:16.873440  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:07:16.900479  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:07:16.927096  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:07:16.955843  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:07:16.983339  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:07:17.013360  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:07:17.041294  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:07:17.061373  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:07:17.067955  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:07:17.081953  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.087146  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.087222  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:07:17.094009  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:07:17.108332  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:07:17.122877  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.128622  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.128708  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:07:17.136010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:07:17.150544  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:07:17.165028  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.170897  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.170982  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:07:17.177949  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
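	The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are the openssl subject hashes of the corresponding PEMs, which is how the system trust store locates them. A small sketch that reproduces the hash-and-link step for minikubeCA.pem by shelling out to the same openssl command the log runs (creating the link would require root; here it is only printed):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Same command as in the log: openssl x509 -hash -noout -in <pem>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	fmt.Println("would run: ln -fs", pemPath, link)
    }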
	I1007 12:07:17.192554  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:07:17.197582  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:07:17.197639  401591 kubeadm.go:392] StartCluster: {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:07:17.197720  401591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:07:17.197783  401591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:07:17.244966  401591 cri.go:89] found id: ""
	I1007 12:07:17.245041  401591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:07:17.257993  401591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:07:17.270516  401591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:07:17.282873  401591 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:07:17.282897  401591 kubeadm.go:157] found existing configuration files:
	
	I1007 12:07:17.282953  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:07:17.293921  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:07:17.294014  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:07:17.305489  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:07:17.315800  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:07:17.315863  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:07:17.326391  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:07:17.336609  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:07:17.336691  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:07:17.347761  401591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:07:17.358288  401591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:07:17.358369  401591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:07:17.369688  401591 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 12:07:17.494169  401591 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:07:17.494284  401591 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:07:17.626708  401591 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:07:17.626813  401591 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:07:17.626906  401591 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:07:17.639261  401591 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:07:17.853154  401591 out.go:235]   - Generating certificates and keys ...
	I1007 12:07:17.853313  401591 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:07:17.853396  401591 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:07:17.853510  401591 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:07:17.853594  401591 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:07:18.070639  401591 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:07:18.133955  401591 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:07:18.493727  401591 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:07:18.493854  401591 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-628553 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1007 12:07:18.624521  401591 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:07:18.624725  401591 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-628553 localhost] and IPs [192.168.39.110 127.0.0.1 ::1]
	I1007 12:07:18.772457  401591 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:07:19.133450  401591 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:07:19.279063  401591 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:07:19.279188  401591 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:07:19.348410  401591 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:07:19.574804  401591 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:07:19.645430  401591 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:07:19.894630  401591 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:07:20.065666  401591 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:07:20.066298  401591 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:07:20.071555  401591 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:07:20.073562  401591 out.go:235]   - Booting up control plane ...
	I1007 12:07:20.073670  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:07:20.073742  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:07:20.073803  401591 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:07:20.089334  401591 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:07:20.096504  401591 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:07:20.096582  401591 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:07:20.238757  401591 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:07:20.238922  401591 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:07:21.247383  401591 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.007919898s
	I1007 12:07:21.247485  401591 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:07:26.913696  401591 kubeadm.go:310] [api-check] The API server is healthy after 5.671139192s
	I1007 12:07:26.932589  401591 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:07:26.948791  401591 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:07:27.494371  401591 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:07:27.494637  401591 kubeadm.go:310] [mark-control-plane] Marking the node ha-628553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:07:27.512639  401591 kubeadm.go:310] [bootstrap-token] Using token: jd5sg7.ynaw0s6f9h2yr29w
	I1007 12:07:27.514508  401591 out.go:235]   - Configuring RBAC rules ...
	I1007 12:07:27.514678  401591 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:07:27.527273  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:07:27.537651  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:07:27.542026  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:07:27.545879  401591 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:07:27.550174  401591 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:07:27.568355  401591 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:07:27.807712  401591 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:07:28.321610  401591 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:07:28.321657  401591 kubeadm.go:310] 
	I1007 12:07:28.321720  401591 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:07:28.321728  401591 kubeadm.go:310] 
	I1007 12:07:28.321852  401591 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:07:28.321870  401591 kubeadm.go:310] 
	I1007 12:07:28.321904  401591 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:07:28.321987  401591 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:07:28.322064  401591 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:07:28.322074  401591 kubeadm.go:310] 
	I1007 12:07:28.322155  401591 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:07:28.322171  401591 kubeadm.go:310] 
	I1007 12:07:28.322225  401591 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:07:28.322234  401591 kubeadm.go:310] 
	I1007 12:07:28.322293  401591 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:07:28.322386  401591 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:07:28.322471  401591 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:07:28.322481  401591 kubeadm.go:310] 
	I1007 12:07:28.322608  401591 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:07:28.322677  401591 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:07:28.322684  401591 kubeadm.go:310] 
	I1007 12:07:28.322753  401591 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jd5sg7.ynaw0s6f9h2yr29w \
	I1007 12:07:28.322898  401591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 \
	I1007 12:07:28.322931  401591 kubeadm.go:310] 	--control-plane 
	I1007 12:07:28.322941  401591 kubeadm.go:310] 
	I1007 12:07:28.323057  401591 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:07:28.323067  401591 kubeadm.go:310] 
	I1007 12:07:28.323165  401591 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jd5sg7.ynaw0s6f9h2yr29w \
	I1007 12:07:28.323318  401591 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 
	I1007 12:07:28.324193  401591 kubeadm.go:310] W1007 12:07:17.473376     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:07:28.324456  401591 kubeadm.go:310] W1007 12:07:17.474417     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:07:28.324568  401591 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
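	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch that recomputes it from the CA path used in this log, so a worker join can be verified independently of the kubeadm output:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Hash the DER-encoded SubjectPublicKeyInfo, which is what kubeadm does.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }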
	I1007 12:07:28.324604  401591 cni.go:84] Creating CNI manager for ""
	I1007 12:07:28.324616  401591 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 12:07:28.326463  401591 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 12:07:28.327680  401591 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 12:07:28.333563  401591 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 12:07:28.333587  401591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 12:07:28.357058  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1007 12:07:28.763710  401591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:07:28.763800  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:28.763837  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553 minikube.k8s.io/updated_at=2024_10_07T12_07_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=true
	I1007 12:07:28.789823  401591 ops.go:34] apiserver oom_adj: -16
	I1007 12:07:28.939139  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:29.440288  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:29.939479  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:30.440099  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:30.940243  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:31.439830  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:31.939544  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:32.439274  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:07:32.691661  401591 kubeadm.go:1113] duration metric: took 3.927936335s to wait for elevateKubeSystemPrivileges
	I1007 12:07:32.691702  401591 kubeadm.go:394] duration metric: took 15.494065691s to StartCluster
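	The repeated "kubectl get sa default" runs above are a poll: minikube waits for the default ServiceAccount to exist before it finishes elevating kube-system privileges. A rough standalone equivalent of that loop, assuming a plain kubectl on PATH instead of the bundled /var/lib/minikube/binaries/v1.31.1/kubectl:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default",
    			"--kubeconfig", "/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		// Not ready yet; retry, mirroring the ~500ms cadence visible in the log.
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for default service account")
    }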
	I1007 12:07:32.691720  401591 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:32.691805  401591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:07:32.694409  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:07:32.695052  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:07:32.695056  401591 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:07:32.695093  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:07:32.695116  401591 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:07:32.695224  401591 addons.go:69] Setting storage-provisioner=true in profile "ha-628553"
	I1007 12:07:32.695233  401591 addons.go:69] Setting default-storageclass=true in profile "ha-628553"
	I1007 12:07:32.695246  401591 addons.go:234] Setting addon storage-provisioner=true in "ha-628553"
	I1007 12:07:32.695276  401591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-628553"
	I1007 12:07:32.695321  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:32.695278  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:07:32.695828  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.695856  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.695880  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.695904  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.713283  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41975
	I1007 12:07:32.713330  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41139
	I1007 12:07:32.713795  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.713821  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.714372  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.714404  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.714470  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.714495  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.714860  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.714918  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.715087  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.715613  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.715671  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.717649  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:07:32.717950  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:07:32.718459  401591 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:07:32.718801  401591 addons.go:234] Setting addon default-storageclass=true in "ha-628553"
	I1007 12:07:32.718846  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:07:32.719253  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.719305  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.733464  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45313
	I1007 12:07:32.734011  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.734570  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.734597  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.734946  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.735147  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.736496  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38269
	I1007 12:07:32.736815  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:32.737247  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.737699  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.737724  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.738090  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.738558  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:32.738606  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:32.739129  401591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:07:32.740633  401591 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:07:32.740659  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:07:32.740683  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:32.744392  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.744885  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:32.744914  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.745085  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:32.745311  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:32.745493  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:32.745635  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:32.755450  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33449
	I1007 12:07:32.756180  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:32.756775  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:32.756839  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:32.757215  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:32.757439  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:07:32.759112  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:07:32.759361  401591 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:07:32.759380  401591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:07:32.759399  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:07:32.761925  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.762241  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:07:32.762266  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:07:32.762381  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:07:32.762573  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:07:32.762681  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:07:32.762803  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:07:32.893511  401591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:07:32.927665  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 12:07:32.930086  401591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:07:33.749725  401591 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
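	The sed pipeline a few lines up splices a hosts{} block into the CoreDNS Corefile immediately before the "forward . /etc/resolv.conf" directive, which is what produces the host.minikube.internal record reported here. A sketch of the same rewrite on an abbreviated, assumed Corefile:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Abbreviated, assumed Corefile; the real one comes from the coredns ConfigMap.
    	corefile := `.:53 {
            errors
            health
            kubernetes cluster.local in-addr.arpa ip6.arpa
            forward . /etc/resolv.conf
            cache 30
    }`
    	// Block inserted by the sed expression, mapping the host gateway IP.
    	hostsBlock := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"

    	var out strings.Builder
    	for _, line := range strings.Split(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			out.WriteString(hostsBlock)
    		}
    		out.WriteString(line + "\n")
    	}
    	fmt.Print(out.String())
    }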
	I1007 12:07:33.749834  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.749857  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750070  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750085  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750150  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.750183  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750217  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750228  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750239  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750364  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750400  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750412  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.750420  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.750560  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.750625  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750637  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750639  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.750662  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.750758  401591 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:07:33.750779  401591 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:07:33.750910  401591 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1007 12:07:33.750920  401591 round_trippers.go:469] Request Headers:
	I1007 12:07:33.750933  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:07:33.750938  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:07:33.762601  401591 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:07:33.763351  401591 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1007 12:07:33.763370  401591 round_trippers.go:469] Request Headers:
	I1007 12:07:33.763378  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:07:33.763383  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:07:33.763386  401591 round_trippers.go:473]     Content-Type: application/json
	I1007 12:07:33.766118  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:07:33.766300  401591 main.go:141] libmachine: Making call to close driver server
	I1007 12:07:33.766313  401591 main.go:141] libmachine: (ha-628553) Calling .Close
	I1007 12:07:33.766629  401591 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:07:33.766646  401591 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:07:33.766684  401591 main.go:141] libmachine: (ha-628553) DBG | Closing plugin on server side
	I1007 12:07:33.768511  401591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 12:07:33.770162  401591 addons.go:510] duration metric: took 1.075047661s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 12:07:33.770212  401591 start.go:246] waiting for cluster config update ...
	I1007 12:07:33.770227  401591 start.go:255] writing updated cluster config ...
	I1007 12:07:33.772026  401591 out.go:201] 
	I1007 12:07:33.773570  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:33.773647  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:33.775167  401591 out.go:177] * Starting "ha-628553-m02" control-plane node in "ha-628553" cluster
	I1007 12:07:33.776386  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:07:33.776419  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:07:33.776564  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:07:33.776577  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:07:33.776670  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:33.776889  401591 start.go:360] acquireMachinesLock for ha-628553-m02: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:07:33.776949  401591 start.go:364] duration metric: took 33.552µs to acquireMachinesLock for "ha-628553-m02"
	I1007 12:07:33.776978  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:07:33.777088  401591 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1007 12:07:33.779624  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:07:33.779742  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:07:33.779791  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:07:33.795004  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I1007 12:07:33.795415  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:07:33.795909  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:07:33.795931  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:07:33.796264  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:07:33.796498  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:33.796628  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:33.796770  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:07:33.796805  401591 client.go:168] LocalClient.Create starting
	I1007 12:07:33.796847  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:07:33.796894  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:07:33.796911  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:07:33.796968  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:07:33.796987  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:07:33.796997  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:07:33.797015  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:07:33.797023  401591 main.go:141] libmachine: (ha-628553-m02) Calling .PreCreateCheck
	I1007 12:07:33.797222  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:33.797700  401591 main.go:141] libmachine: Creating machine...
	I1007 12:07:33.797714  401591 main.go:141] libmachine: (ha-628553-m02) Calling .Create
	I1007 12:07:33.797891  401591 main.go:141] libmachine: (ha-628553-m02) Creating KVM machine...
	I1007 12:07:33.799094  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found existing default KVM network
	I1007 12:07:33.799243  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found existing private KVM network mk-ha-628553
	I1007 12:07:33.799364  401591 main.go:141] libmachine: (ha-628553-m02) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 ...
	I1007 12:07:33.799377  401591 main.go:141] libmachine: (ha-628553-m02) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:07:33.799477  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:33.799367  401944 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:07:33.799603  401591 main.go:141] libmachine: (ha-628553-m02) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:07:34.069404  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.069235  401944 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa...
	I1007 12:07:34.176325  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.176157  401944 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/ha-628553-m02.rawdisk...
	I1007 12:07:34.176359  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Writing magic tar header
	I1007 12:07:34.176372  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Writing SSH key tar header
	I1007 12:07:34.176384  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:34.176303  401944 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 ...
	I1007 12:07:34.176398  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02
	I1007 12:07:34.176501  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:07:34.176544  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:07:34.176555  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02 (perms=drwx------)
	I1007 12:07:34.176567  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:07:34.176576  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:07:34.176583  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:07:34.176594  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:07:34.176609  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:07:34.176622  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:07:34.176635  401591 main.go:141] libmachine: (ha-628553-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:07:34.176651  401591 main.go:141] libmachine: (ha-628553-m02) Creating domain...
	I1007 12:07:34.176660  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:07:34.176668  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Checking permissions on dir: /home
	I1007 12:07:34.176675  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Skipping /home - not owner
	I1007 12:07:34.177701  401591 main.go:141] libmachine: (ha-628553-m02) define libvirt domain using xml: 
	I1007 12:07:34.177730  401591 main.go:141] libmachine: (ha-628553-m02) <domain type='kvm'>
	I1007 12:07:34.177740  401591 main.go:141] libmachine: (ha-628553-m02)   <name>ha-628553-m02</name>
	I1007 12:07:34.177751  401591 main.go:141] libmachine: (ha-628553-m02)   <memory unit='MiB'>2200</memory>
	I1007 12:07:34.177759  401591 main.go:141] libmachine: (ha-628553-m02)   <vcpu>2</vcpu>
	I1007 12:07:34.177766  401591 main.go:141] libmachine: (ha-628553-m02)   <features>
	I1007 12:07:34.177777  401591 main.go:141] libmachine: (ha-628553-m02)     <acpi/>
	I1007 12:07:34.177786  401591 main.go:141] libmachine: (ha-628553-m02)     <apic/>
	I1007 12:07:34.177796  401591 main.go:141] libmachine: (ha-628553-m02)     <pae/>
	I1007 12:07:34.177809  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.177820  401591 main.go:141] libmachine: (ha-628553-m02)   </features>
	I1007 12:07:34.177834  401591 main.go:141] libmachine: (ha-628553-m02)   <cpu mode='host-passthrough'>
	I1007 12:07:34.177844  401591 main.go:141] libmachine: (ha-628553-m02)   
	I1007 12:07:34.177853  401591 main.go:141] libmachine: (ha-628553-m02)   </cpu>
	I1007 12:07:34.177864  401591 main.go:141] libmachine: (ha-628553-m02)   <os>
	I1007 12:07:34.177870  401591 main.go:141] libmachine: (ha-628553-m02)     <type>hvm</type>
	I1007 12:07:34.177876  401591 main.go:141] libmachine: (ha-628553-m02)     <boot dev='cdrom'/>
	I1007 12:07:34.177883  401591 main.go:141] libmachine: (ha-628553-m02)     <boot dev='hd'/>
	I1007 12:07:34.177888  401591 main.go:141] libmachine: (ha-628553-m02)     <bootmenu enable='no'/>
	I1007 12:07:34.177895  401591 main.go:141] libmachine: (ha-628553-m02)   </os>
	I1007 12:07:34.177900  401591 main.go:141] libmachine: (ha-628553-m02)   <devices>
	I1007 12:07:34.177910  401591 main.go:141] libmachine: (ha-628553-m02)     <disk type='file' device='cdrom'>
	I1007 12:07:34.177952  401591 main.go:141] libmachine: (ha-628553-m02)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/boot2docker.iso'/>
	I1007 12:07:34.177981  401591 main.go:141] libmachine: (ha-628553-m02)       <target dev='hdc' bus='scsi'/>
	I1007 12:07:34.177992  401591 main.go:141] libmachine: (ha-628553-m02)       <readonly/>
	I1007 12:07:34.178002  401591 main.go:141] libmachine: (ha-628553-m02)     </disk>
	I1007 12:07:34.178015  401591 main.go:141] libmachine: (ha-628553-m02)     <disk type='file' device='disk'>
	I1007 12:07:34.178028  401591 main.go:141] libmachine: (ha-628553-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:07:34.178044  401591 main.go:141] libmachine: (ha-628553-m02)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/ha-628553-m02.rawdisk'/>
	I1007 12:07:34.178055  401591 main.go:141] libmachine: (ha-628553-m02)       <target dev='hda' bus='virtio'/>
	I1007 12:07:34.178066  401591 main.go:141] libmachine: (ha-628553-m02)     </disk>
	I1007 12:07:34.178073  401591 main.go:141] libmachine: (ha-628553-m02)     <interface type='network'>
	I1007 12:07:34.178085  401591 main.go:141] libmachine: (ha-628553-m02)       <source network='mk-ha-628553'/>
	I1007 12:07:34.178102  401591 main.go:141] libmachine: (ha-628553-m02)       <model type='virtio'/>
	I1007 12:07:34.178114  401591 main.go:141] libmachine: (ha-628553-m02)     </interface>
	I1007 12:07:34.178125  401591 main.go:141] libmachine: (ha-628553-m02)     <interface type='network'>
	I1007 12:07:34.178138  401591 main.go:141] libmachine: (ha-628553-m02)       <source network='default'/>
	I1007 12:07:34.178148  401591 main.go:141] libmachine: (ha-628553-m02)       <model type='virtio'/>
	I1007 12:07:34.178157  401591 main.go:141] libmachine: (ha-628553-m02)     </interface>
	I1007 12:07:34.178172  401591 main.go:141] libmachine: (ha-628553-m02)     <serial type='pty'>
	I1007 12:07:34.178184  401591 main.go:141] libmachine: (ha-628553-m02)       <target port='0'/>
	I1007 12:07:34.178191  401591 main.go:141] libmachine: (ha-628553-m02)     </serial>
	I1007 12:07:34.178201  401591 main.go:141] libmachine: (ha-628553-m02)     <console type='pty'>
	I1007 12:07:34.178212  401591 main.go:141] libmachine: (ha-628553-m02)       <target type='serial' port='0'/>
	I1007 12:07:34.178223  401591 main.go:141] libmachine: (ha-628553-m02)     </console>
	I1007 12:07:34.178233  401591 main.go:141] libmachine: (ha-628553-m02)     <rng model='virtio'>
	I1007 12:07:34.178266  401591 main.go:141] libmachine: (ha-628553-m02)       <backend model='random'>/dev/random</backend>
	I1007 12:07:34.178292  401591 main.go:141] libmachine: (ha-628553-m02)     </rng>
	I1007 12:07:34.178303  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.178316  401591 main.go:141] libmachine: (ha-628553-m02)     
	I1007 12:07:34.178324  401591 main.go:141] libmachine: (ha-628553-m02)   </devices>
	I1007 12:07:34.178331  401591 main.go:141] libmachine: (ha-628553-m02) </domain>
	I1007 12:07:34.178342  401591 main.go:141] libmachine: (ha-628553-m02) 
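
The block above is the raw libvirt domain XML that the kvm2 driver defines for the new node. As a rough illustration only (not minikube's actual code), a stdlib-only Go sketch that renders a similar template and then defines and starts the domain by shelling out to virsh could look like the following; the template fields, temp-file handling and virsh invocations are assumptions made for the example:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
	"text/template"
)

// Trimmed-down libvirt domain template; only a few of the fields from the
// XML logged above are kept, purely for illustration.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='hd'/></os>
</domain>`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
}

// defineAndStart writes the rendered XML to a temp file, then runs
// "virsh define" and "virsh start", mirroring the "Creating domain..." step.
func defineAndStart(cfg domainConfig) error {
	var xml bytes.Buffer
	if err := template.Must(template.New("domain").Parse(domainTmpl)).Execute(&xml, cfg); err != nil {
		return err
	}
	f, err := os.CreateTemp("", "domain-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.Write(xml.Bytes()); err != nil {
		return err
	}
	f.Close()
	if out, err := exec.Command("virsh", "define", f.Name()).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", cfg.Name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart(domainConfig{Name: "ha-628553-m02", MemoryMiB: 2200, CPUs: 2}); err != nil {
		log.Fatal(err)
	}
}
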
	I1007 12:07:34.185967  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:33:2a:81 in network default
	I1007 12:07:34.186520  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring networks are active...
	I1007 12:07:34.186550  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:34.187255  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring network default is active
	I1007 12:07:34.187562  401591 main.go:141] libmachine: (ha-628553-m02) Ensuring network mk-ha-628553 is active
	I1007 12:07:34.187923  401591 main.go:141] libmachine: (ha-628553-m02) Getting domain xml...
	I1007 12:07:34.188741  401591 main.go:141] libmachine: (ha-628553-m02) Creating domain...
	I1007 12:07:35.460306  401591 main.go:141] libmachine: (ha-628553-m02) Waiting to get IP...
	I1007 12:07:35.461270  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.461715  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.461750  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.461693  401944 retry.go:31] will retry after 211.598538ms: waiting for machine to come up
	I1007 12:07:35.675347  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.675895  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.675927  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.675805  401944 retry.go:31] will retry after 296.849ms: waiting for machine to come up
	I1007 12:07:35.974395  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:35.974893  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:35.974954  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:35.974854  401944 retry.go:31] will retry after 388.404149ms: waiting for machine to come up
	I1007 12:07:36.365448  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:36.366155  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:36.366184  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:36.366075  401944 retry.go:31] will retry after 534.318698ms: waiting for machine to come up
	I1007 12:07:36.901907  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:36.902475  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:36.902512  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:36.902413  401944 retry.go:31] will retry after 649.263788ms: waiting for machine to come up
	I1007 12:07:37.553345  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:37.553872  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:37.553898  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:37.553792  401944 retry.go:31] will retry after 939.159086ms: waiting for machine to come up
	I1007 12:07:38.495133  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:38.495757  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:38.495785  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:38.495703  401944 retry.go:31] will retry after 913.128072ms: waiting for machine to come up
	I1007 12:07:39.410208  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:39.410778  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:39.410847  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:39.410734  401944 retry.go:31] will retry after 1.275296837s: waiting for machine to come up
	I1007 12:07:40.688215  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:40.688737  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:40.688763  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:40.688692  401944 retry.go:31] will retry after 1.706568868s: waiting for machine to come up
	I1007 12:07:42.397331  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:42.398210  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:42.398242  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:42.398140  401944 retry.go:31] will retry after 2.035219193s: waiting for machine to come up
	I1007 12:07:44.435063  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:44.435558  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:44.435604  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:44.435541  401944 retry.go:31] will retry after 2.129313504s: waiting for machine to come up
	I1007 12:07:46.567866  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:46.568337  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:46.568363  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:46.568294  401944 retry.go:31] will retry after 2.900138556s: waiting for machine to come up
	I1007 12:07:49.470446  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:49.470835  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:49.470861  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:49.470787  401944 retry.go:31] will retry after 2.802723119s: waiting for machine to come up
	I1007 12:07:52.276755  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:52.277120  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:07:52.277151  401591 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:07:52.277100  401944 retry.go:31] will retry after 4.815030442s: waiting for machine to come up
	I1007 12:07:57.095944  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.096384  401591 main.go:141] libmachine: (ha-628553-m02) Found IP for machine: 192.168.39.169
	I1007 12:07:57.096411  401591 main.go:141] libmachine: (ha-628553-m02) Reserving static IP address...
	I1007 12:07:57.096424  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has current primary IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.096805  401591 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find host DHCP lease matching {name: "ha-628553-m02", mac: "52:54:00:59:4a:2e", ip: "192.168.39.169"} in network mk-ha-628553
	I1007 12:07:57.173671  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Getting to WaitForSSH function...
	I1007 12:07:57.173707  401591 main.go:141] libmachine: (ha-628553-m02) Reserved static IP address: 192.168.39.169
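
The "will retry after ...: waiting for machine to come up" lines above are the driver polling for the guest's DHCP lease until the freshly started domain reports an IP. A stdlib-only Go sketch of that polling pattern, shelling out to virsh net-dhcp-leases with a growing delay, might look like the following; the output parsing and the timing constants are assumptions for illustration, not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForLeaseIP polls "virsh net-dhcp-leases <network>" until a lease for
// the given MAC address shows up, retrying with a growing delay.
// The table parsing below is deliberately simplistic.
func waitForLeaseIP(network, mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
		if err == nil {
			for _, line := range strings.Split(string(out), "\n") {
				if !strings.Contains(line, mac) {
					continue
				}
				// Lease rows contain an "a.b.c.d/prefix" address column.
				for _, field := range strings.Fields(line) {
					if strings.Contains(field, "/") && strings.Count(field, ".") == 3 {
						return strings.Split(field, "/")[0], nil
					}
				}
			}
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // back off, roughly like the retry intervals logged above
		}
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s after %s", mac, network, timeout)
}

func main() {
	ip, err := waitForLeaseIP("mk-ha-628553", "52:54:00:59:4a:2e", 3*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("found IP:", ip)
}
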
	I1007 12:07:57.173721  401591 main.go:141] libmachine: (ha-628553-m02) Waiting for SSH to be available...
	I1007 12:07:57.176077  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.176414  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.176448  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.176591  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH client type: external
	I1007 12:07:57.176618  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa (-rw-------)
	I1007 12:07:57.176654  401591 main.go:141] libmachine: (ha-628553-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:07:57.176671  401591 main.go:141] libmachine: (ha-628553-m02) DBG | About to run SSH command:
	I1007 12:07:57.176683  401591 main.go:141] libmachine: (ha-628553-m02) DBG | exit 0
	I1007 12:07:57.299343  401591 main.go:141] libmachine: (ha-628553-m02) DBG | SSH cmd err, output: <nil>: 
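
The probe that just succeeded ("About to run SSH command: exit 0") is issued through the external ssh client with the options logged a few lines earlier. A hedged Go sketch of that reachability check, reusing those same flags via os/exec, could be as follows; the retry count is an example value and this is not minikube's own WaitForSSH code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReachable runs "exit 0" on the guest using the external ssh binary,
// mirroring the options shown in the log (key-only auth, no host key checks).
func sshReachable(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa"
	// Poll until the freshly created VM accepts SSH.
	for i := 0; i < 30; i++ {
		if sshReachable("192.168.39.169", key) {
			fmt.Println("ssh is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("ssh never became available")
}
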
	I1007 12:07:57.299606  401591 main.go:141] libmachine: (ha-628553-m02) KVM machine creation complete!
	I1007 12:07:57.299951  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:57.300520  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:57.300733  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:57.300899  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:07:57.300909  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetState
	I1007 12:07:57.302247  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:07:57.302263  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:07:57.302270  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:07:57.302277  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.304689  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.305046  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.305083  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.305220  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.305416  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.305566  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.305687  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.305859  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.306075  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.306087  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:07:57.402628  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:57.402652  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:07:57.402660  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.405841  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.406213  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.406245  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.406443  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.406658  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.406871  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.407020  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.407143  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.407310  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.407320  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:07:57.503882  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:07:57.503964  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:07:57.503972  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:07:57.503980  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.504231  401591 buildroot.go:166] provisioning hostname "ha-628553-m02"
	I1007 12:07:57.504259  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.504487  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.507249  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.507577  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.507606  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.507742  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.507923  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.508054  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.508176  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.508480  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.508681  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.508694  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m02 && echo "ha-628553-m02" | sudo tee /etc/hostname
	I1007 12:07:57.622198  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m02
	
	I1007 12:07:57.622239  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.625084  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.625439  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.625478  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.625644  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:57.625837  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.626007  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:57.626130  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:57.626308  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:57.626503  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:57.626525  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:07:57.732566  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:07:57.732598  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:07:57.732622  401591 buildroot.go:174] setting up certificates
	I1007 12:07:57.732636  401591 provision.go:84] configureAuth start
	I1007 12:07:57.732649  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:07:57.732948  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:57.735493  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.735786  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.735817  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.735963  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:57.737975  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.738293  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:57.738318  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:57.738455  401591 provision.go:143] copyHostCerts
	I1007 12:07:57.738486  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:57.738525  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:07:57.738541  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:07:57.738610  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:07:57.738684  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:57.738703  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:07:57.738710  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:07:57.738733  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:07:57.738777  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:57.738793  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:07:57.738800  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:07:57.738820  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:07:57.738866  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m02 san=[127.0.0.1 192.168.39.169 ha-628553-m02 localhost minikube]
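
The "generating server cert" step above issues a machine certificate signed by the minikube CA with the listed SANs. The sketch below only shows the general technique with Go's crypto/x509; it is not minikube's implementation, and the throwaway in-memory CA, key size, serial number and output file name are assumptions for the example:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// writeServerCert issues a server certificate signed by caCert/caKey with the
// SANs from the log line above (localhost, the node name and its IPs).
func writeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, path string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-628553-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-628553-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.169")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	return os.WriteFile(path, pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
}

func main() {
	// Throwaway CA, only so the example is self-contained and runnable.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	if err := writeServerCert(caCert, caKey, "server.pem"); err != nil {
		log.Fatal(err)
	}
}
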
	I1007 12:07:58.143814  401591 provision.go:177] copyRemoteCerts
	I1007 12:07:58.143882  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:07:58.143910  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.147250  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.147700  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.147742  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.147869  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.148081  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.148224  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.148327  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.230179  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:07:58.230271  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:07:58.258288  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:07:58.258382  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:07:58.285135  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:07:58.285208  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:07:58.312621  401591 provision.go:87] duration metric: took 579.970325ms to configureAuth
	I1007 12:07:58.312652  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:07:58.312828  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:07:58.312907  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.315586  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.315959  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.315990  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.316222  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.316422  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.316601  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.316743  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.316927  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:58.317142  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:58.317161  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:07:58.545249  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:07:58.545278  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:07:58.545290  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetURL
	I1007 12:07:58.546702  401591 main.go:141] libmachine: (ha-628553-m02) DBG | Using libvirt version 6000000
	I1007 12:07:58.548842  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.549284  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.549317  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.549407  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:07:58.549418  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:07:58.549424  401591 client.go:171] duration metric: took 24.752608877s to LocalClient.Create
	I1007 12:07:58.549459  401591 start.go:167] duration metric: took 24.752691243s to libmachine.API.Create "ha-628553"
	I1007 12:07:58.549474  401591 start.go:293] postStartSetup for "ha-628553-m02" (driver="kvm2")
	I1007 12:07:58.549489  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:07:58.549507  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.549760  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:07:58.549786  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.551787  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.552071  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.552105  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.552239  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.552437  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.552667  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.552832  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.629949  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:07:58.634600  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:07:58.634633  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:07:58.634716  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:07:58.634820  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:07:58.634833  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:07:58.634948  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:07:58.644927  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:07:58.670613  401591 start.go:296] duration metric: took 121.120015ms for postStartSetup
	I1007 12:07:58.670687  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:07:58.671316  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:58.673738  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.674117  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.674143  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.674429  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:07:58.674687  401591 start.go:128] duration metric: took 24.897586771s to createHost
	I1007 12:07:58.674717  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.676881  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.677232  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.677261  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.677369  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.677545  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.677717  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.677844  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.677997  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:07:58.678177  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:07:58.678188  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:07:58.776120  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302878.748851389
	
	I1007 12:07:58.776147  401591 fix.go:216] guest clock: 1728302878.748851389
	I1007 12:07:58.776158  401591 fix.go:229] Guest: 2024-10-07 12:07:58.748851389 +0000 UTC Remote: 2024-10-07 12:07:58.674704612 +0000 UTC m=+72.466738357 (delta=74.146777ms)
	I1007 12:07:58.776181  401591 fix.go:200] guest clock delta is within tolerance: 74.146777ms
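
The fix.go lines above compare the guest's "date +%s.%N" output against the host clock and accept the machine when the delta is inside a tolerance. A small Go illustration of that arithmetic, using the values from this log (the 2-second tolerance is an assumption, not necessarily minikube's setting):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far the
// guest clock is from the given host reference time.
func clockDelta(guestEpoch string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestEpoch, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Host and guest values taken from the log above.
	host := time.Date(2024, 10, 7, 12, 7, 58, 674704612, time.UTC)
	delta, err := clockDelta("1728302878.748851389", host)
	if err != nil {
		fmt.Println(err)
		return
	}
	within := delta < 2*time.Second && delta > -2*time.Second
	// Prints roughly 74ms, matching the delta reported in the log.
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
}
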
	I1007 12:07:58.776187  401591 start.go:83] releasing machines lock for "ha-628553-m02", held for 24.999226116s
	I1007 12:07:58.776211  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.776496  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:07:58.779145  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.779528  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.779560  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.782069  401591 out.go:177] * Found network options:
	I1007 12:07:58.783459  401591 out.go:177]   - NO_PROXY=192.168.39.110
	W1007 12:07:58.784861  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:07:58.784899  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785569  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785759  401591 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:07:58.785866  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:07:58.785905  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	W1007 12:07:58.785978  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:07:58.786070  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:07:58.786094  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:07:58.788699  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.788936  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789075  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.789100  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789286  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.789381  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:07:58.789402  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:07:58.789444  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.789536  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:07:58.789631  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.789706  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:07:58.789783  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:58.789824  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:07:58.789925  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:07:59.016879  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:07:59.023633  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:07:59.023710  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:07:59.041152  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:07:59.041183  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:07:59.041268  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:07:59.058168  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:07:59.074089  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:07:59.074153  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:07:59.089704  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:07:59.104808  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:07:59.234539  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:07:59.391501  401591 docker.go:233] disabling docker service ...
	I1007 12:07:59.391564  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:07:59.406313  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:07:59.420588  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:07:59.553910  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:07:59.664194  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:07:59.679241  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:07:59.699517  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:07:59.699594  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.710670  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:07:59.710739  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.721864  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.733897  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.746035  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:07:59.757811  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.769881  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:07:59.789700  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
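
Taken together, the sed edits above leave a small CRI-O drop-in behind. Assuming the stock layout of /etc/crio/crio.conf.d/02-crio.conf in the minikube guest image, the affected keys would end up roughly as follows (section placement follows upstream CRI-O defaults and is an assumption here):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
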
	I1007 12:07:59.800942  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:07:59.811016  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:07:59.811084  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:07:59.827337  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:07:59.838316  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:07:59.964123  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:08:00.067227  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:08:00.067310  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:08:00.073044  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:08:00.073120  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:08:00.077800  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:08:00.127300  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:08:00.127397  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:08:00.156941  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:08:00.190072  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:08:00.191853  401591 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:08:00.193177  401591 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:08:00.196263  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:08:00.196746  401591 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:08:00.196779  401591 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:08:00.196928  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:08:00.201903  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:00.215603  401591 mustload.go:65] Loading cluster: ha-628553
	I1007 12:08:00.215803  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:00.216063  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:00.216108  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:00.231500  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I1007 12:08:00.231984  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:00.232515  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:00.232538  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:00.232906  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:00.233117  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:08:00.234754  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:08:00.235153  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:00.235205  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:00.251119  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39693
	I1007 12:08:00.251713  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:00.252244  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:00.252269  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:00.252599  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:00.252779  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:08:00.252870  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.169
	I1007 12:08:00.252879  401591 certs.go:194] generating shared ca certs ...
	I1007 12:08:00.252902  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.253042  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:08:00.253085  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:08:00.253095  401591 certs.go:256] generating profile certs ...
	I1007 12:08:00.253179  401591 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:08:00.253210  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7
	I1007 12:08:00.253235  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.254]
	I1007 12:08:00.386276  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 ...
	I1007 12:08:00.386312  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7: {Name:mk3203e0eda21b3db6f2dd0a690d84683948f867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.386525  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7 ...
	I1007 12:08:00.386553  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7: {Name:mkfc3d62b17b51155465b7666879f42f7347e54c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:08:00.386666  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.c0623de7 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:08:00.386851  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.c0623de7 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:08:00.387056  401591 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:08:00.387074  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:08:00.387092  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:08:00.387112  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:08:00.387134  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:08:00.387151  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:08:00.387168  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:08:00.387184  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:08:00.387203  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:08:00.387277  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:08:00.387324  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:08:00.387338  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:08:00.387372  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:08:00.387402  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:08:00.387436  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:08:00.387492  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:08:00.387532  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:08:00.387560  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:08:00.387578  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:00.387630  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:08:00.391299  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:00.391779  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:08:00.391810  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:00.392002  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:08:00.392226  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:08:00.392412  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:08:00.392620  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:08:00.467476  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:08:00.476301  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:08:00.489016  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:08:00.494136  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:08:00.509194  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:08:00.513966  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:08:00.525972  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:08:00.530730  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:08:00.543099  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:08:00.548533  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:08:00.560887  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:08:00.565537  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:08:00.578649  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:08:00.607063  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:08:00.634228  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:08:00.660702  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:08:00.687010  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 12:08:00.713721  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:08:00.740934  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:08:00.768133  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:08:00.794572  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:08:00.820864  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:08:00.847539  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:08:00.876441  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:08:00.895435  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:08:00.913785  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:08:00.932908  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:08:00.951947  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:08:00.969974  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:08:00.988515  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:08:01.007600  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:08:01.014010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:08:01.025708  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.030507  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.030585  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:08:01.037094  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:08:01.049368  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:08:01.062454  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.067451  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.067538  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:08:01.073743  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:08:01.085386  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:08:01.096871  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.102352  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.102441  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:08:01.108559  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:08:01.120791  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:08:01.125796  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:08:01.125854  401591 kubeadm.go:934] updating node {m02 192.168.39.169 8443 v1.31.1 crio true true} ...
	I1007 12:08:01.125945  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:08:01.125972  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:08:01.126011  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:08:01.142927  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:08:01.143035  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:08:01.143100  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:08:01.154825  401591 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:08:01.154901  401591 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:08:01.166246  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:08:01.166280  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:08:01.166330  401591 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1007 12:08:01.166350  401591 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1007 12:08:01.166352  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:08:01.171889  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:08:01.171923  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:08:01.865609  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:08:01.865701  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:08:01.871954  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:08:01.872006  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:08:01.960218  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:08:02.002318  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:08:02.002440  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:08:02.020653  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:08:02.020697  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1007 12:08:02.500270  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:08:02.510702  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:08:02.529075  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:08:02.546750  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:08:02.565165  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:08:02.569362  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:08:02.582612  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:02.707124  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:08:02.725325  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:08:02.725700  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:02.725750  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:02.741913  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45599
	I1007 12:08:02.742441  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:02.742930  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:02.742953  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:02.743338  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:02.743547  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:08:02.743717  401591 start.go:317] joinCluster: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:08:02.743844  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:08:02.743869  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:08:02.747217  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:02.747665  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:08:02.747694  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:08:02.747872  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:08:02.748048  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:08:02.748193  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:08:02.748311  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:08:02.893504  401591 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:02.893569  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xsg4ou.msqa1mnarg4j4fst --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m02 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443"
	I1007 12:08:24.411215  401591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xsg4ou.msqa1mnarg4j4fst --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m02 --control-plane --apiserver-advertise-address=192.168.39.169 --apiserver-bind-port=8443": (21.517602331s)
	I1007 12:08:24.411250  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:08:24.991460  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553-m02 minikube.k8s.io/updated_at=2024_10_07T12_08_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=false
	I1007 12:08:25.149659  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-628553-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:08:25.289097  401591 start.go:319] duration metric: took 22.545377397s to joinCluster
	I1007 12:08:25.289200  401591 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:25.289529  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:25.291070  401591 out.go:177] * Verifying Kubernetes components...
	I1007 12:08:25.292571  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:08:25.564988  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:08:25.614504  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:08:25.614869  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:08:25.614979  401591 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:08:25.615327  401591 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m02" to be "Ready" ...
	I1007 12:08:25.615461  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:25.615476  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:25.615490  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:25.615502  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:25.627711  401591 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1007 12:08:26.115662  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:26.115688  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:26.115696  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:26.115700  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:26.119790  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:26.615649  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:26.615673  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:26.615681  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:26.615685  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:26.619911  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.115994  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:27.116020  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:27.116029  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:27.116032  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:27.120154  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.616200  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:27.616222  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:27.616230  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:27.616234  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:27.620627  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:27.621267  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:28.116293  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:28.116321  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:28.116331  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:28.116337  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:28.121199  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:28.616216  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:28.616252  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:28.616260  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:28.616275  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:28.624618  401591 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:08:29.116125  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:29.116148  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:29.116156  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:29.116161  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:29.143192  401591 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1007 12:08:29.616218  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:29.616252  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:29.616260  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:29.616263  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:29.621645  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:29.622758  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:30.116377  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:30.116414  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:30.116434  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:30.116442  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:30.120276  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:30.616264  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:30.616289  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:30.616298  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:30.616302  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:30.619656  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:31.115662  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:31.115686  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:31.115695  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:31.115698  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:31.120037  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:31.616077  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:31.616103  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:31.616112  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:31.616119  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.027207  401591 round_trippers.go:574] Response Status: 200 OK in 411 milliseconds
	I1007 12:08:32.028035  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:32.116023  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:32.116049  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:32.116061  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:32.116066  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.123800  401591 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:08:32.615910  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:32.615936  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:32.615945  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:32.615949  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:32.619848  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:33.115622  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:33.115645  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:33.115652  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:33.115657  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:33.119744  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:33.616336  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:33.616363  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:33.616372  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:33.616378  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:33.620139  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:34.116322  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:34.116357  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:34.116368  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:34.116374  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:34.119958  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:34.120614  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:34.615645  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:34.615672  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:34.615682  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:34.615687  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:34.619017  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:35.115922  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:35.115951  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:35.115965  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:35.115969  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:35.119735  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:35.615551  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:35.615578  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:35.615589  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:35.615595  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:35.619854  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:36.115806  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:36.115830  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:36.115839  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:36.115842  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:36.119509  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:36.616590  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:36.616626  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:36.616638  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:36.616646  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:36.620711  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:36.621977  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:37.116201  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:37.116229  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:37.116237  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:37.116241  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:37.119861  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:37.615763  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:37.615789  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:37.615798  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:37.615801  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:37.619542  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:38.116230  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:38.116254  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:38.116262  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:38.116266  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:38.119599  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:38.616300  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:38.616327  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:38.616336  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:38.616340  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:38.622637  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:08:38.623148  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:39.116056  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:39.116089  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:39.116102  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:39.116108  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:39.119313  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:39.615634  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:39.615660  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:39.615668  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:39.615672  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:39.619449  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:40.116288  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:40.116318  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:40.116330  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:40.116337  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:40.120596  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:40.615608  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:40.615636  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:40.615645  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:40.615650  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:40.619654  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:41.115684  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:41.115712  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:41.115723  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:41.115729  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:41.119362  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:41.119941  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:41.616052  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:41.616080  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:41.616092  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:41.616099  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:41.621355  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:42.116153  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:42.116179  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:42.116190  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:42.116195  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:42.119158  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:42.615813  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:42.615838  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:42.615849  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:42.615856  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:42.619479  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.116150  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.116183  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.116193  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.116197  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.119726  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.120412  401591 node_ready.go:53] node "ha-628553-m02" has status "Ready":"False"
	I1007 12:08:43.615803  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.615825  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.615833  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.615837  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.619282  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.619820  401591 node_ready.go:49] node "ha-628553-m02" has status "Ready":"True"
	I1007 12:08:43.619840  401591 node_ready.go:38] duration metric: took 18.00448517s for node "ha-628553-m02" to be "Ready" ...
	I1007 12:08:43.619850  401591 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:08:43.619942  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:43.619953  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.619962  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.619968  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.625430  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:43.631358  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.631464  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:08:43.631473  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.631481  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.631485  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.634796  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.635822  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.635842  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.635852  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.635858  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.638589  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.639211  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.639241  401591 pod_ready.go:82] duration metric: took 7.850216ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.639256  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.639336  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:08:43.639349  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.639360  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.639367  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.642168  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.642861  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.642879  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.642885  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.642891  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.645645  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.646131  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.646152  401591 pod_ready.go:82] duration metric: took 6.888201ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.646164  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.646225  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:08:43.646233  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.646240  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.646244  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.649034  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.649700  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:43.649718  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.649726  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.649731  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.652932  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.653474  401591 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.653494  401591 pod_ready.go:82] duration metric: took 7.324392ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.653506  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.653570  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:08:43.653578  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.653585  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.653589  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.656625  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:43.657314  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:43.657332  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.657340  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.657344  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.659929  401591 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:08:43.660411  401591 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:43.660431  401591 pod_ready.go:82] duration metric: took 6.918652ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.660446  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:43.816876  401591 request.go:632] Waited for 156.326759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:08:43.816939  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:08:43.816943  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:43.816951  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:43.816956  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:43.820806  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.015988  401591 request.go:632] Waited for 194.312012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.016073  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.016081  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.016091  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.016121  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.019609  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.020136  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.020158  401591 pod_ready.go:82] duration metric: took 359.705878ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.020169  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.216359  401591 request.go:632] Waited for 196.109348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:08:44.216441  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:08:44.216449  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.216460  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.216468  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.222633  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:08:44.416891  401591 request.go:632] Waited for 193.411987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:44.416975  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:44.416983  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.416993  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.416999  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.420954  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.421562  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.421582  401591 pod_ready.go:82] duration metric: took 401.406583ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.421592  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.616625  401591 request.go:632] Waited for 194.940502ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:08:44.616688  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:08:44.616693  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.616701  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.616707  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.620706  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.815865  401591 request.go:632] Waited for 194.348456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.815947  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:44.815954  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:44.815966  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:44.815972  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:44.819923  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:44.820749  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:44.820767  401591 pod_ready.go:82] duration metric: took 399.169132ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:44.820778  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.015880  401591 request.go:632] Waited for 195.028084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:08:45.015978  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:08:45.015983  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.015991  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.015997  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.020421  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.216616  401591 request.go:632] Waited for 195.391964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:45.216689  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:45.216696  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.216707  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.216712  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.221024  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.221697  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:45.221728  401591 pod_ready.go:82] duration metric: took 400.942386ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.221743  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.416754  401591 request.go:632] Waited for 194.909444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:08:45.416821  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:08:45.416834  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.416842  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.416848  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.421020  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.616294  401591 request.go:632] Waited for 194.468244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:45.616378  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:45.616387  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.616399  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.616406  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.620542  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:45.621474  401591 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:45.621500  401591 pod_ready.go:82] duration metric: took 399.748616ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.621515  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:45.816631  401591 request.go:632] Waited for 195.03231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:08:45.816699  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:08:45.816705  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:45.816713  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:45.816718  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:45.820607  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:46.016805  401591 request.go:632] Waited for 195.41966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.016911  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.016918  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.016926  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.016930  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.021351  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.021889  401591 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.021914  401591 pod_ready.go:82] duration metric: took 400.391171ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.021926  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.215992  401591 request.go:632] Waited for 193.955382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:08:46.216085  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:08:46.216092  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.216102  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.216108  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.219547  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:46.416084  401591 request.go:632] Waited for 195.950012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:46.416159  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:08:46.416167  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.416179  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.416198  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.420356  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.420972  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.420993  401591 pod_ready.go:82] duration metric: took 399.057557ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.421005  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.616254  401591 request.go:632] Waited for 195.135703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:08:46.616343  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:08:46.616355  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.616366  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.616375  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.625428  401591 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:08:46.816391  401591 request.go:632] Waited for 190.390972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.816468  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:08:46.816473  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.816482  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.816488  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.820601  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:46.821110  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:08:46.821133  401591 pod_ready.go:82] duration metric: took 400.121331ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:08:46.821145  401591 pod_ready.go:39] duration metric: took 3.201283112s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
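
The pod_ready checks above poll each system pod's status conditions through the apiserver. A minimal client-go sketch of the same idea, assuming a kubeconfig at the hypothetical path /home/user/.kube/config and reusing the coredns pod name from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True,
// mirroring the `has status "Ready":"True"` checks in the log above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; minikube writes its own under ~/.kube.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one system pod until it is Ready or we give up, roughly like pod_ready.go.
	for i := 0; i < 60; i++ {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-ktmzq", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod")
}
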
	I1007 12:08:46.821161  401591 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:08:46.821222  401591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:08:46.839291  401591 api_server.go:72] duration metric: took 21.550041864s to wait for apiserver process to appear ...
	I1007 12:08:46.839326  401591 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:08:46.839354  401591 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:08:46.845263  401591 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1007 12:08:46.845352  401591 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:08:46.845360  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:46.845369  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:46.845373  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:46.846772  401591 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1007 12:08:46.846883  401591 api_server.go:141] control plane version: v1.31.1
	I1007 12:08:46.846902  401591 api_server.go:131] duration metric: took 7.569264ms to wait for apiserver health ...
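
The healthz wait above is a plain HTTPS GET against the apiserver that expects a 200 response with the body "ok". A rough sketch with the Go standard library; TLS verification is skipped here only to keep the example short, whereas the real client presents the cluster's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Sketch only: verification is disabled so the example stays self-contained.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.110:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the literal body "ok", as seen above.
	fmt.Println(resp.StatusCode, string(body))
}
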
	I1007 12:08:46.846910  401591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:08:47.016224  401591 request.go:632] Waited for 169.208213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.016315  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.016324  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.016337  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.016348  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.021945  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:47.026191  401591 system_pods.go:59] 17 kube-system pods found
	I1007 12:08:47.026232  401591 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:08:47.026238  401591 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:08:47.026242  401591 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:08:47.026246  401591 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:08:47.026251  401591 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:08:47.026255  401591 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:08:47.026260  401591 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:08:47.026264  401591 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:08:47.026268  401591 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:08:47.026273  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:08:47.026276  401591 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:08:47.026279  401591 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:08:47.026282  401591 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:08:47.026285  401591 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:08:47.026288  401591 system_pods.go:61] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:08:47.026291  401591 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:08:47.026294  401591 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:08:47.026300  401591 system_pods.go:74] duration metric: took 179.385599ms to wait for pod list to return data ...
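
The repeated "Waited for ... due to client-side throttling" lines come from client-go's default client-side rate limiter (roughly 5 requests/s with a small burst). A hedged sketch of relaxing that limiter on the rest.Config before building the clientset, again assuming a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// Raising QPS/Burst relaxes the client-side limiter that produced the
	// "Waited for ... due to client-side throttling" messages above.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}
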
	I1007 12:08:47.026311  401591 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:08:47.216777  401591 request.go:632] Waited for 190.349118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:08:47.216844  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:08:47.216851  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.216861  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.216867  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.220501  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:08:47.220765  401591 default_sa.go:45] found service account: "default"
	I1007 12:08:47.220790  401591 default_sa.go:55] duration metric: took 194.471685ms for default service account to be created ...
	I1007 12:08:47.220803  401591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:08:47.416131  401591 request.go:632] Waited for 195.245207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.416207  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:08:47.416215  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.416224  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.416238  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.422085  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:08:47.426776  401591 system_pods.go:86] 17 kube-system pods found
	I1007 12:08:47.426812  401591 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:08:47.426820  401591 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:08:47.426826  401591 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:08:47.426832  401591 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:08:47.426837  401591 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:08:47.426842  401591 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:08:47.426848  401591 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:08:47.426853  401591 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:08:47.426858  401591 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:08:47.426863  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:08:47.426868  401591 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:08:47.426873  401591 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:08:47.426881  401591 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:08:47.426887  401591 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:08:47.426892  401591 system_pods.go:89] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:08:47.426898  401591 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:08:47.426907  401591 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:08:47.426918  401591 system_pods.go:126] duration metric: took 206.105758ms to wait for k8s-apps to be running ...
	I1007 12:08:47.426931  401591 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:08:47.427006  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:08:47.444273  401591 system_svc.go:56] duration metric: took 17.328443ms WaitForService to wait for kubelet
	I1007 12:08:47.444313  401591 kubeadm.go:582] duration metric: took 22.155070744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
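
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` over SSH and treats a zero exit status as "running". A minimal local equivalent with os/exec:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 only when the unit is active,
	// which is the same signal the ssh_runner check above relies on.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
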
	I1007 12:08:47.444339  401591 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:08:47.616864  401591 request.go:632] Waited for 172.422315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:08:47.616938  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:08:47.616945  401591 round_trippers.go:469] Request Headers:
	I1007 12:08:47.616961  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:08:47.616969  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:08:47.621972  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:08:47.622888  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:08:47.622919  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:08:47.622945  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:08:47.622950  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:08:47.622955  401591 node_conditions.go:105] duration metric: took 178.610758ms to run NodePressure ...
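
The NodePressure step reads each node's capacity from its status, which is where the "node cpu capacity is 2" and ephemeral-storage figures come from. A small client-go sketch that prints the same values, assuming the same hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The same figures the log reports as node cpu / ephemeral-storage capacity.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
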
	I1007 12:08:47.622983  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:08:47.623014  401591 start.go:255] writing updated cluster config ...
	I1007 12:08:47.625468  401591 out.go:201] 
	I1007 12:08:47.627200  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:08:47.627328  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:08:47.629319  401591 out.go:177] * Starting "ha-628553-m03" control-plane node in "ha-628553" cluster
	I1007 12:08:47.630767  401591 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:08:47.630807  401591 cache.go:56] Caching tarball of preloaded images
	I1007 12:08:47.630955  401591 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:08:47.630986  401591 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:08:47.631145  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:08:47.631383  401591 start.go:360] acquireMachinesLock for ha-628553-m03: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:08:47.631439  401591 start.go:364] duration metric: took 32.151µs to acquireMachinesLock for "ha-628553-m03"
	I1007 12:08:47.631463  401591 start.go:93] Provisioning new machine with config: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:08:47.631573  401591 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1007 12:08:47.633396  401591 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 12:08:47.633527  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:08:47.633570  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:08:47.650117  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I1007 12:08:47.650636  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:08:47.651158  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:08:47.651181  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:08:47.651622  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:08:47.651783  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:08:47.651941  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:08:47.652092  401591 start.go:159] libmachine.API.Create for "ha-628553" (driver="kvm2")
	I1007 12:08:47.652123  401591 client.go:168] LocalClient.Create starting
	I1007 12:08:47.652165  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 12:08:47.652208  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:08:47.652231  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:08:47.652328  401591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 12:08:47.652361  401591 main.go:141] libmachine: Decoding PEM data...
	I1007 12:08:47.652377  401591 main.go:141] libmachine: Parsing certificate...
	I1007 12:08:47.652400  401591 main.go:141] libmachine: Running pre-create checks...
	I1007 12:08:47.652412  401591 main.go:141] libmachine: (ha-628553-m03) Calling .PreCreateCheck
	I1007 12:08:47.652572  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:08:47.652989  401591 main.go:141] libmachine: Creating machine...
	I1007 12:08:47.653006  401591 main.go:141] libmachine: (ha-628553-m03) Calling .Create
	I1007 12:08:47.653161  401591 main.go:141] libmachine: (ha-628553-m03) Creating KVM machine...
	I1007 12:08:47.654461  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found existing default KVM network
	I1007 12:08:47.654504  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found existing private KVM network mk-ha-628553
	I1007 12:08:47.654721  401591 main.go:141] libmachine: (ha-628553-m03) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 ...
	I1007 12:08:47.654751  401591 main.go:141] libmachine: (ha-628553-m03) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 12:08:47.654817  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:47.654705  402350 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:08:47.654927  401591 main.go:141] libmachine: (ha-628553-m03) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 12:08:47.943561  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:47.943397  402350 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa...
	I1007 12:08:48.157872  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:48.157710  402350 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/ha-628553-m03.rawdisk...
	I1007 12:08:48.157916  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Writing magic tar header
	I1007 12:08:48.157932  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Writing SSH key tar header
	I1007 12:08:48.157944  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:48.157825  402350 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 ...
	I1007 12:08:48.157970  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03
	I1007 12:08:48.158063  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03 (perms=drwx------)
	I1007 12:08:48.158107  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 12:08:48.158121  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 12:08:48.158141  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:08:48.158150  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 12:08:48.158232  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 12:08:48.158257  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 12:08:48.158266  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 12:08:48.158280  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 12:08:48.158289  401591 main.go:141] libmachine: (ha-628553-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 12:08:48.158307  401591 main.go:141] libmachine: (ha-628553-m03) Creating domain...
	I1007 12:08:48.158321  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home/jenkins
	I1007 12:08:48.158335  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Checking permissions on dir: /home
	I1007 12:08:48.158350  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Skipping /home - not owner
	I1007 12:08:48.159295  401591 main.go:141] libmachine: (ha-628553-m03) define libvirt domain using xml: 
	I1007 12:08:48.159314  401591 main.go:141] libmachine: (ha-628553-m03) <domain type='kvm'>
	I1007 12:08:48.159321  401591 main.go:141] libmachine: (ha-628553-m03)   <name>ha-628553-m03</name>
	I1007 12:08:48.159327  401591 main.go:141] libmachine: (ha-628553-m03)   <memory unit='MiB'>2200</memory>
	I1007 12:08:48.159361  401591 main.go:141] libmachine: (ha-628553-m03)   <vcpu>2</vcpu>
	I1007 12:08:48.159380  401591 main.go:141] libmachine: (ha-628553-m03)   <features>
	I1007 12:08:48.159389  401591 main.go:141] libmachine: (ha-628553-m03)     <acpi/>
	I1007 12:08:48.159398  401591 main.go:141] libmachine: (ha-628553-m03)     <apic/>
	I1007 12:08:48.159406  401591 main.go:141] libmachine: (ha-628553-m03)     <pae/>
	I1007 12:08:48.159416  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159423  401591 main.go:141] libmachine: (ha-628553-m03)   </features>
	I1007 12:08:48.159430  401591 main.go:141] libmachine: (ha-628553-m03)   <cpu mode='host-passthrough'>
	I1007 12:08:48.159437  401591 main.go:141] libmachine: (ha-628553-m03)   
	I1007 12:08:48.159446  401591 main.go:141] libmachine: (ha-628553-m03)   </cpu>
	I1007 12:08:48.159455  401591 main.go:141] libmachine: (ha-628553-m03)   <os>
	I1007 12:08:48.159465  401591 main.go:141] libmachine: (ha-628553-m03)     <type>hvm</type>
	I1007 12:08:48.159477  401591 main.go:141] libmachine: (ha-628553-m03)     <boot dev='cdrom'/>
	I1007 12:08:48.159488  401591 main.go:141] libmachine: (ha-628553-m03)     <boot dev='hd'/>
	I1007 12:08:48.159499  401591 main.go:141] libmachine: (ha-628553-m03)     <bootmenu enable='no'/>
	I1007 12:08:48.159508  401591 main.go:141] libmachine: (ha-628553-m03)   </os>
	I1007 12:08:48.159518  401591 main.go:141] libmachine: (ha-628553-m03)   <devices>
	I1007 12:08:48.159527  401591 main.go:141] libmachine: (ha-628553-m03)     <disk type='file' device='cdrom'>
	I1007 12:08:48.159543  401591 main.go:141] libmachine: (ha-628553-m03)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/boot2docker.iso'/>
	I1007 12:08:48.159554  401591 main.go:141] libmachine: (ha-628553-m03)       <target dev='hdc' bus='scsi'/>
	I1007 12:08:48.159561  401591 main.go:141] libmachine: (ha-628553-m03)       <readonly/>
	I1007 12:08:48.159571  401591 main.go:141] libmachine: (ha-628553-m03)     </disk>
	I1007 12:08:48.159579  401591 main.go:141] libmachine: (ha-628553-m03)     <disk type='file' device='disk'>
	I1007 12:08:48.159596  401591 main.go:141] libmachine: (ha-628553-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 12:08:48.159611  401591 main.go:141] libmachine: (ha-628553-m03)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/ha-628553-m03.rawdisk'/>
	I1007 12:08:48.159621  401591 main.go:141] libmachine: (ha-628553-m03)       <target dev='hda' bus='virtio'/>
	I1007 12:08:48.159629  401591 main.go:141] libmachine: (ha-628553-m03)     </disk>
	I1007 12:08:48.159639  401591 main.go:141] libmachine: (ha-628553-m03)     <interface type='network'>
	I1007 12:08:48.159647  401591 main.go:141] libmachine: (ha-628553-m03)       <source network='mk-ha-628553'/>
	I1007 12:08:48.159659  401591 main.go:141] libmachine: (ha-628553-m03)       <model type='virtio'/>
	I1007 12:08:48.159667  401591 main.go:141] libmachine: (ha-628553-m03)     </interface>
	I1007 12:08:48.159677  401591 main.go:141] libmachine: (ha-628553-m03)     <interface type='network'>
	I1007 12:08:48.159685  401591 main.go:141] libmachine: (ha-628553-m03)       <source network='default'/>
	I1007 12:08:48.159695  401591 main.go:141] libmachine: (ha-628553-m03)       <model type='virtio'/>
	I1007 12:08:48.159702  401591 main.go:141] libmachine: (ha-628553-m03)     </interface>
	I1007 12:08:48.159711  401591 main.go:141] libmachine: (ha-628553-m03)     <serial type='pty'>
	I1007 12:08:48.159722  401591 main.go:141] libmachine: (ha-628553-m03)       <target port='0'/>
	I1007 12:08:48.159732  401591 main.go:141] libmachine: (ha-628553-m03)     </serial>
	I1007 12:08:48.159741  401591 main.go:141] libmachine: (ha-628553-m03)     <console type='pty'>
	I1007 12:08:48.159751  401591 main.go:141] libmachine: (ha-628553-m03)       <target type='serial' port='0'/>
	I1007 12:08:48.159759  401591 main.go:141] libmachine: (ha-628553-m03)     </console>
	I1007 12:08:48.159769  401591 main.go:141] libmachine: (ha-628553-m03)     <rng model='virtio'>
	I1007 12:08:48.159779  401591 main.go:141] libmachine: (ha-628553-m03)       <backend model='random'>/dev/random</backend>
	I1007 12:08:48.159786  401591 main.go:141] libmachine: (ha-628553-m03)     </rng>
	I1007 12:08:48.159791  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159796  401591 main.go:141] libmachine: (ha-628553-m03)     
	I1007 12:08:48.159801  401591 main.go:141] libmachine: (ha-628553-m03)   </devices>
	I1007 12:08:48.159807  401591 main.go:141] libmachine: (ha-628553-m03) </domain>
	I1007 12:08:48.159814  401591 main.go:141] libmachine: (ha-628553-m03) 
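
The XML logged above is what the kvm2 driver hands to libvirt before creating the domain. A minimal sketch of defining and booting such a domain with the Go libvirt bindings (libvirt.org/go/libvirt, which needs cgo and the libvirt development headers); the XML file name below is a placeholder:

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Assumes the domain XML (like the one logged above) was saved to a file.
	xml, err := os.ReadFile("ha-628553-m03.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent domain, then boot it, which is roughly what the
	// "define libvirt domain using xml" and "Creating domain..." lines correspond to.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain defined and started")
}
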
	I1007 12:08:48.167454  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:19:9b:6c in network default
	I1007 12:08:48.168104  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring networks are active...
	I1007 12:08:48.168135  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:48.168903  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring network default is active
	I1007 12:08:48.169240  401591 main.go:141] libmachine: (ha-628553-m03) Ensuring network mk-ha-628553 is active
	I1007 12:08:48.169699  401591 main.go:141] libmachine: (ha-628553-m03) Getting domain xml...
	I1007 12:08:48.170532  401591 main.go:141] libmachine: (ha-628553-m03) Creating domain...
	I1007 12:08:49.440366  401591 main.go:141] libmachine: (ha-628553-m03) Waiting to get IP...
	I1007 12:08:49.441248  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:49.441739  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:49.441772  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:49.441711  402350 retry.go:31] will retry after 304.052486ms: waiting for machine to come up
	I1007 12:08:49.747277  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:49.747963  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:49.747996  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:49.747904  402350 retry.go:31] will retry after 363.120796ms: waiting for machine to come up
	I1007 12:08:50.113364  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.113854  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.113886  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.113784  402350 retry.go:31] will retry after 318.214065ms: waiting for machine to come up
	I1007 12:08:50.434117  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.434742  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.434772  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.434669  402350 retry.go:31] will retry after 557.05591ms: waiting for machine to come up
	I1007 12:08:50.993368  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:50.993877  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:50.993902  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:50.993839  402350 retry.go:31] will retry after 534.862367ms: waiting for machine to come up
	I1007 12:08:51.530722  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:51.531299  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:51.531330  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:51.531236  402350 retry.go:31] will retry after 674.225428ms: waiting for machine to come up
	I1007 12:08:52.207219  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:52.207779  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:52.207805  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:52.207744  402350 retry.go:31] will retry after 750.38088ms: waiting for machine to come up
	I1007 12:08:52.959912  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:52.960419  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:52.960456  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:52.960375  402350 retry.go:31] will retry after 1.032745665s: waiting for machine to come up
	I1007 12:08:53.994776  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:53.995316  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:53.995345  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:53.995259  402350 retry.go:31] will retry after 1.174624993s: waiting for machine to come up
	I1007 12:08:55.171247  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:55.171687  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:55.171709  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:55.171640  402350 retry.go:31] will retry after 2.315279218s: waiting for machine to come up
	I1007 12:08:57.488351  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:57.488810  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:57.488838  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:57.488771  402350 retry.go:31] will retry after 1.769995019s: waiting for machine to come up
	I1007 12:08:59.260072  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:08:59.260605  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:08:59.260637  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:08:59.260547  402350 retry.go:31] will retry after 3.352254545s: waiting for machine to come up
	I1007 12:09:02.616362  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:02.616828  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:09:02.616850  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:09:02.616780  402350 retry.go:31] will retry after 4.496920566s: waiting for machine to come up
	I1007 12:09:07.118974  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:07.119565  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:09:07.119593  401591 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:09:07.119492  402350 retry.go:31] will retry after 4.132199874s: waiting for machine to come up
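
The "will retry after ..." lines show the driver polling for the domain's DHCP lease with a growing, jittered delay. A generic sketch of that retry pattern; the predicate below is a hypothetical stand-in for the lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn with a growing, jittered delay,
// much like the "will retry after ..." loop in the log above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		// Add up to 50% jitter and grow the delay for the next round.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return errors.New("gave up waiting")
}

func main() {
	start := time.Now()
	// Hypothetical predicate standing in for "does the domain have an IP yet?".
	err := retryWithBackoff(10, 300*time.Millisecond, func() error {
		if time.Since(start) > 3*time.Second {
			return nil
		}
		return errors.New("no IP address yet")
	})
	fmt.Println("result:", err)
}
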
	I1007 12:09:11.256196  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.256790  401591 main.go:141] libmachine: (ha-628553-m03) Found IP for machine: 192.168.39.149
	I1007 12:09:11.256824  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has current primary IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.256833  401591 main.go:141] libmachine: (ha-628553-m03) Reserving static IP address...
	I1007 12:09:11.257175  401591 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find host DHCP lease matching {name: "ha-628553-m03", mac: "52:54:00:3c:9f:34", ip: "192.168.39.149"} in network mk-ha-628553
	I1007 12:09:11.338093  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Getting to WaitForSSH function...
	I1007 12:09:11.338124  401591 main.go:141] libmachine: (ha-628553-m03) Reserved static IP address: 192.168.39.149
	I1007 12:09:11.338139  401591 main.go:141] libmachine: (ha-628553-m03) Waiting for SSH to be available...
	I1007 12:09:11.341396  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.341892  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.341925  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.342105  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH client type: external
	I1007 12:09:11.342133  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa (-rw-------)
	I1007 12:09:11.342177  401591 main.go:141] libmachine: (ha-628553-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:09:11.342197  401591 main.go:141] libmachine: (ha-628553-m03) DBG | About to run SSH command:
	I1007 12:09:11.342214  401591 main.go:141] libmachine: (ha-628553-m03) DBG | exit 0
	I1007 12:09:11.471281  401591 main.go:141] libmachine: (ha-628553-m03) DBG | SSH cmd err, output: <nil>: 
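(Editor's sketch, not part of the log: the WaitForSSH step above simply retries a no-op "exit 0" over the external ssh client until the new guest answers. A minimal standalone version of that loop, assuming placeholder host/key/retry values rather than anything beyond what the log already shows:)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries `ssh ... exit 0` until it succeeds or the attempts run out,
// mirroring the external-client WaitForSSH loop logged above.
func waitForSSH(user, host, keyPath string, attempts int, delay time.Duration) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0",
	}
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("ssh", args...).Run(); err == nil {
			return nil // guest is reachable
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("ssh not available after %d attempts: %w", attempts, err)
}

func main() {
	// Host taken from the log; key path and retry budget are illustrative.
	if err := waitForSSH("docker", "192.168.39.149", "/path/to/id_rsa", 10, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}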
	I1007 12:09:11.471621  401591 main.go:141] libmachine: (ha-628553-m03) KVM machine creation complete!
	I1007 12:09:11.471952  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:09:11.472582  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:11.472840  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:11.473024  401591 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 12:09:11.473037  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetState
	I1007 12:09:11.474527  401591 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 12:09:11.474548  401591 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 12:09:11.474555  401591 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 12:09:11.474563  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.477303  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.477650  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.477666  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.477788  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.477993  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.478174  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.478306  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.478470  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.478702  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.478716  401591 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 12:09:11.587071  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:09:11.587095  401591 main.go:141] libmachine: Detecting the provisioner...
	I1007 12:09:11.587105  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.589883  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.590265  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.590295  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.590447  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.590647  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.590829  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.591025  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.591169  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.591356  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.591367  401591 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 12:09:11.704302  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 12:09:11.704403  401591 main.go:141] libmachine: found compatible host: buildroot
	I1007 12:09:11.704415  401591 main.go:141] libmachine: Provisioning with buildroot...
	I1007 12:09:11.704426  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.704723  401591 buildroot.go:166] provisioning hostname "ha-628553-m03"
	I1007 12:09:11.704750  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.704905  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.707646  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.708032  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.708062  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.708204  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.708466  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.708666  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.708795  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.708972  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.709229  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.709247  401591 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m03 && echo "ha-628553-m03" | sudo tee /etc/hostname
	I1007 12:09:11.834437  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m03
	
	I1007 12:09:11.834498  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.837609  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.837983  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.838013  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.838374  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:11.838612  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.838805  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:11.839005  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:11.839175  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:11.839394  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:11.839420  401591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:09:11.962733  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:09:11.962765  401591 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:09:11.962788  401591 buildroot.go:174] setting up certificates
	I1007 12:09:11.962801  401591 provision.go:84] configureAuth start
	I1007 12:09:11.962814  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:09:11.963127  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:11.965755  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.966166  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.966201  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.966379  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:11.968397  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.968678  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:11.968703  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:11.968812  401591 provision.go:143] copyHostCerts
	I1007 12:09:11.968847  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:09:11.968897  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:09:11.968910  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:09:11.968994  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:09:11.969133  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:09:11.969163  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:09:11.969173  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:09:11.969222  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:09:11.969301  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:09:11.969326  401591 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:09:11.969332  401591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:09:11.969367  401591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:09:11.969444  401591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m03 san=[127.0.0.1 192.168.39.149 ha-628553-m03 localhost minikube]
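(Editor's sketch, not part of the log: the "generating server cert" line lists the SANs baked into the machine's server certificate. A self-contained illustration of issuing such a cert with crypto/x509; the throwaway self-signed CA below only stands in for minikube's ca.pem/ca-key.pem, and error handling is trimmed:)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA; the real flow loads ca.pem / ca-key.pem from disk instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs shown in the log for ha-628553-m03.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-628553-m03"}},
		DNSNames:     []string{"ha-628553-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.149")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}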
	I1007 12:09:12.008085  401591 provision.go:177] copyRemoteCerts
	I1007 12:09:12.008153  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:09:12.008198  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.011020  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.011447  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.011479  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.011639  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.011896  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.012077  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.012241  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.099103  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:09:12.099196  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:09:12.129470  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:09:12.129570  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:09:12.156229  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:09:12.156324  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:09:12.182409  401591 provision.go:87] duration metric: took 219.592268ms to configureAuth
	I1007 12:09:12.182440  401591 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:09:12.182689  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:12.182805  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.186445  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.186906  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.186942  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.187197  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.187409  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.187561  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.187701  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.187919  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:12.188176  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:12.188201  401591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:09:12.442162  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:09:12.442201  401591 main.go:141] libmachine: Checking connection to Docker...
	I1007 12:09:12.442252  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetURL
	I1007 12:09:12.443642  401591 main.go:141] libmachine: (ha-628553-m03) DBG | Using libvirt version 6000000
	I1007 12:09:12.445960  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.446454  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.446484  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.446704  401591 main.go:141] libmachine: Docker is up and running!
	I1007 12:09:12.446717  401591 main.go:141] libmachine: Reticulating splines...
	I1007 12:09:12.446724  401591 client.go:171] duration metric: took 24.794590297s to LocalClient.Create
	I1007 12:09:12.446748  401591 start.go:167] duration metric: took 24.794658821s to libmachine.API.Create "ha-628553"
	I1007 12:09:12.446758  401591 start.go:293] postStartSetup for "ha-628553-m03" (driver="kvm2")
	I1007 12:09:12.446768  401591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:09:12.446787  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.447044  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:09:12.447067  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.449182  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.449535  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.449578  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.449689  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.449866  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.450019  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.450128  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.538407  401591 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:09:12.543112  401591 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:09:12.543143  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:09:12.543238  401591 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:09:12.543327  401591 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:09:12.543349  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:09:12.543452  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:09:12.553965  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:09:12.580260  401591 start.go:296] duration metric: took 133.488077ms for postStartSetup
	I1007 12:09:12.580320  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:09:12.580945  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:12.583692  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.584096  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.584119  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.584577  401591 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:09:12.584810  401591 start.go:128] duration metric: took 24.953224798s to createHost
	I1007 12:09:12.584834  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.586899  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.587276  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.587304  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.587460  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.587666  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.587811  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.587989  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.588157  401591 main.go:141] libmachine: Using SSH client type: native
	I1007 12:09:12.588403  401591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:09:12.588416  401591 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:09:12.699909  401591 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728302952.675618146
	
	I1007 12:09:12.699944  401591 fix.go:216] guest clock: 1728302952.675618146
	I1007 12:09:12.699957  401591 fix.go:229] Guest: 2024-10-07 12:09:12.675618146 +0000 UTC Remote: 2024-10-07 12:09:12.584823089 +0000 UTC m=+146.376856843 (delta=90.795057ms)
	I1007 12:09:12.699983  401591 fix.go:200] guest clock delta is within tolerance: 90.795057ms
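(Editor's sketch, not part of the log: the two fix.go lines above compare the guest's `date +%s.%N` output against the local clock and accept the machine while the delta stays inside a tolerance. The arithmetic, with the 2s tolerance being an assumption for illustration rather than a value taken from the log:)

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and reports whether
// the difference from the local clock is within tol.
func clockDeltaOK(guestOut string, tol time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tol), nil
}

func main() {
	// Guest timestamp taken from the log line above; run at the original time
	// this yielded a ~90ms delta, well inside tolerance.
	d, ok, err := clockDeltaOK("1728302952.675618146", 2*time.Second)
	fmt.Println(d, ok, err)
}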
	I1007 12:09:12.700015  401591 start.go:83] releasing machines lock for "ha-628553-m03", held for 25.068545198s
	I1007 12:09:12.700046  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.700343  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:12.703273  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.703654  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.703685  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.706106  401591 out.go:177] * Found network options:
	I1007 12:09:12.707602  401591 out.go:177]   - NO_PROXY=192.168.39.110,192.168.39.169
	W1007 12:09:12.709074  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:09:12.709105  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:09:12.709125  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.709903  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.710157  401591 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:09:12.710281  401591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:09:12.710326  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	W1007 12:09:12.710331  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:09:12.710350  401591 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:09:12.710418  401591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:09:12.710435  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:09:12.713091  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713270  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713549  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.713577  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713688  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:12.713709  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:12.713890  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.713892  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:09:12.714094  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.714096  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:09:12.714290  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.714293  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:09:12.714448  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.714465  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:09:12.965758  401591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:09:12.972410  401591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:09:12.972510  401591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:09:12.991892  401591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:09:12.991924  401591 start.go:495] detecting cgroup driver to use...
	I1007 12:09:12.992029  401591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:09:13.011092  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:09:13.027119  401591 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:09:13.027197  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:09:13.043881  401591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:09:13.059996  401591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:09:13.194059  401591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:09:13.363286  401591 docker.go:233] disabling docker service ...
	I1007 12:09:13.363388  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:09:13.380238  401591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:09:13.395090  401591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:09:13.539822  401591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:09:13.684666  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:09:13.699806  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:09:13.721312  401591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:09:13.721394  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.734593  401591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:09:13.734678  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.746652  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.758752  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.770649  401591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:09:13.783579  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.796044  401591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:09:13.816090  401591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
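(Editor's sketch, not part of the log: the run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image, switching cgroup_manager to cgroupfs, fixing conmon_cgroup, and opening unprivileged ports via default_sysctls. The same line-oriented substitution expressed in Go; the sample input below is made up, since the real file lives on the guest:)

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}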
	I1007 12:09:13.829211  401591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:09:13.841584  401591 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:09:13.841652  401591 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:09:13.858346  401591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:09:13.870682  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:14.015562  401591 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:09:14.112385  401591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:09:14.112472  401591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:09:14.117706  401591 start.go:563] Will wait 60s for crictl version
	I1007 12:09:14.117785  401591 ssh_runner.go:195] Run: which crictl
	I1007 12:09:14.121973  401591 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:09:14.164678  401591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:09:14.164778  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:09:14.195026  401591 ssh_runner.go:195] Run: crio --version
	I1007 12:09:14.228305  401591 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:09:14.229710  401591 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:09:14.230954  401591 out.go:177]   - env NO_PROXY=192.168.39.110,192.168.39.169
	I1007 12:09:14.232215  401591 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:09:14.235268  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:14.236414  401591 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:09:14.236455  401591 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:09:14.236834  401591 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:09:14.241615  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:09:14.255885  401591 mustload.go:65] Loading cluster: ha-628553
	I1007 12:09:14.256171  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:14.256468  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:14.256525  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:14.272191  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35203
	I1007 12:09:14.272704  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:14.273292  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:14.273317  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:14.273675  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:14.273860  401591 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:09:14.275739  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:09:14.276042  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:14.276078  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:14.291563  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34379
	I1007 12:09:14.291960  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:14.292503  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:14.292525  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:14.292841  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:14.293029  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:09:14.293266  401591 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.149
	I1007 12:09:14.293282  401591 certs.go:194] generating shared ca certs ...
	I1007 12:09:14.293298  401591 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.293454  401591 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:09:14.293500  401591 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:09:14.293518  401591 certs.go:256] generating profile certs ...
	I1007 12:09:14.293595  401591 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:09:14.293624  401591 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5
	I1007 12:09:14.293644  401591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.149 192.168.39.254]
	I1007 12:09:14.510662  401591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 ...
	I1007 12:09:14.510698  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5: {Name:mke401c308480be9f53e9bff701f2e9e4cf3af88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.510883  401591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5 ...
	I1007 12:09:14.510897  401591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5: {Name:mk6ef257f67983b566726de1c934d8565c12b533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:09:14.510988  401591 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.e74801e5 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:09:14.511123  401591 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:09:14.511263  401591 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:09:14.511281  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:09:14.511294  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:09:14.511306  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:09:14.511318  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:09:14.511328  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:09:14.511341  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:09:14.511350  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:09:14.551130  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:09:14.551306  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:09:14.551354  401591 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:09:14.551363  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:09:14.551385  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:09:14.551414  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:09:14.551453  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:09:14.551518  401591 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:09:14.551570  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:14.551588  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:09:14.551601  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:09:14.551640  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:09:14.554905  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:14.555423  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:09:14.555460  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:14.555653  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:09:14.555879  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:09:14.556052  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:09:14.556195  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:09:14.631352  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:09:14.636908  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:09:14.651074  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:09:14.656279  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:09:14.669909  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:09:14.674787  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:09:14.685770  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:09:14.690694  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:09:14.702721  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:09:14.707691  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:09:14.719165  401591 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:09:14.724048  401591 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:09:14.737169  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:09:14.766716  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:09:14.794736  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:09:14.821693  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:09:14.848771  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:09:14.877403  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:09:14.903816  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:09:14.930704  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:09:14.958763  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:09:14.986639  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:09:15.012198  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:09:15.040552  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:09:15.060843  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:09:15.079624  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:09:15.099559  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:09:15.119015  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:09:15.138902  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:09:15.157844  401591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:09:15.176996  401591 ssh_runner.go:195] Run: openssl version
	I1007 12:09:15.183306  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:09:15.195832  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.201336  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.201442  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:09:15.208010  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:09:15.220845  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:09:15.233290  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.238387  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.238463  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:09:15.245368  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:09:15.257699  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:09:15.270151  401591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.274983  401591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.275048  401591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:09:15.281100  401591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:09:15.293845  401591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:09:15.298173  401591 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:09:15.298242  401591 kubeadm.go:934] updating node {m03 192.168.39.149 8443 v1.31.1 crio true true} ...
	I1007 12:09:15.298356  401591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:09:15.298388  401591 kube-vip.go:115] generating kube-vip config ...
	I1007 12:09:15.298436  401591 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:09:15.316713  401591 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:09:15.316806  401591 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:09:15.316885  401591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:09:15.329178  401591 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1007 12:09:15.329260  401591 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1007 12:09:15.341535  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1007 12:09:15.341551  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1007 12:09:15.341569  401591 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1007 12:09:15.341576  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:09:15.341585  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:09:15.341597  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:09:15.341641  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1007 12:09:15.341660  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1007 12:09:15.361141  401591 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:09:15.361169  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1007 12:09:15.361188  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1007 12:09:15.361231  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1007 12:09:15.361273  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1007 12:09:15.361282  401591 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1007 12:09:15.386048  401591 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1007 12:09:15.386094  401591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
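
The "?checksum=file:...sha256" URLs above indicate that each binary is fetched and then verified against a published SHA-256 digest before it is copied to the node. A self-contained Go sketch of that download-and-verify pattern follows, under the assumption that the digest file carries the hex digest as its first field; function names are illustrative and this is not minikube's downloader.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadWithSHA256 fetches url into dest and verifies the bytes against the
// hex digest published at sumURL (the ".sha256" files referenced in the log).
func downloadWithSHA256(url, sumURL, dest string) error {
	sum, err := fetchString(sumURL)
	if err != nil {
		return err
	}
	// Digest files may contain "<hash>" or "<hash>  <filename>"; keep the hash.
	fields := strings.Fields(sum)
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file at %s", sumURL)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
	}
	return nil
}

func fetchString(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	// Example using the kubectl URL from the log; downloads ~56MB when run.
	url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	if err := downloadWithSHA256(url, url+".sha256", "/tmp/kubectl"); err != nil {
		panic(err)
	}
}
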
	I1007 12:09:16.354010  401591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:09:16.365447  401591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:09:16.386247  401591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:09:16.405656  401591 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:09:16.424160  401591 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:09:16.428897  401591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
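
The shell one-liner above makes the control-plane.minikube.internal mapping idempotent: any stale line is filtered out of /etc/hosts before the current VIP is appended. The same logic as a minimal Go sketch; the path and helper name are illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell one-liner in the log: drop any stale
// line ending in "<tab>control-plane.minikube.internal" and append the
// current VIP mapping.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry, same as the grep -v in the log
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines, then append the fresh mapping.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host), "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0644)
}

func main() {
	// Values copied from the log; writing /etc/hosts normally requires root.
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
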
	I1007 12:09:16.443784  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:16.576452  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:09:16.595070  401591 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:09:16.595602  401591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:09:16.595675  401591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:09:16.612706  401591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40581
	I1007 12:09:16.613341  401591 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:09:16.613998  401591 main.go:141] libmachine: Using API Version  1
	I1007 12:09:16.614030  401591 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:09:16.614425  401591 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:09:16.614648  401591 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:09:16.614817  401591 start.go:317] joinCluster: &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:09:16.615034  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1007 12:09:16.615063  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:09:16.618382  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:16.618897  401591 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:09:16.618931  401591 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:09:16.619128  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:09:16.619318  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:09:16.619512  401591 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:09:16.619676  401591 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:09:16.786244  401591 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:09:16.786300  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lajva.py7n2yqd96dw6gb3 --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m03 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443"
	I1007 12:09:40.133777  401591 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7lajva.py7n2yqd96dw6gb3 --discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-628553-m03 --control-plane --apiserver-advertise-address=192.168.39.149 --apiserver-bind-port=8443": (23.347442914s)
	I1007 12:09:40.133833  401591 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1007 12:09:40.642262  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-628553-m03 minikube.k8s.io/updated_at=2024_10_07T12_09_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=ha-628553 minikube.k8s.io/primary=false
	I1007 12:09:40.798800  401591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-628553-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1007 12:09:40.938486  401591 start.go:319] duration metric: took 24.323665443s to joinCluster
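
After the join, the two kubectl invocations above label the new node and remove the control-plane NoSchedule taint so workloads can be scheduled on it. A client-go sketch of the same two operations follows; the node name and kubeconfig path are copied from the log for illustration and the label set is abbreviated, so this is not minikube's implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// labelAndUntaint labels the joined node (the log sets several
// minikube.k8s.io/* labels; only one is shown here) and drops the
// control-plane NoSchedule taint, equivalent to "kubectl taint ... :NoSchedule-".
func labelAndUntaint(ctx context.Context, cs *kubernetes.Clientset, nodeName string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/primary"] = "false"

	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue // drop the taint
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept

	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := labelAndUntaint(context.Background(), cs, "ha-628553-m03"); err != nil {
		panic(err)
	}
	fmt.Println("node labeled and untainted")
}
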
	I1007 12:09:40.938574  401591 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:09:40.938992  401591 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:09:40.939839  401591 out.go:177] * Verifying Kubernetes components...
	I1007 12:09:40.941073  401591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:09:41.179331  401591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:09:41.207454  401591 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:09:41.207837  401591 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:09:41.207937  401591 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:09:41.208281  401591 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m03" to be "Ready" ...
	I1007 12:09:41.208393  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:41.208405  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:41.208416  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:41.208425  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:41.212516  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:41.709058  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:41.709088  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:41.709105  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:41.709111  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:41.712889  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:42.209244  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:42.209270  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:42.209282  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:42.209291  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:42.215411  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:42.708822  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:42.708852  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:42.708859  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:42.708864  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:42.712350  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:43.208783  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:43.208814  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:43.208825  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:43.208830  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:43.212641  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:43.213313  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:43.708554  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:43.708586  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:43.708598  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:43.708603  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:43.712869  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:44.209341  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:44.209369  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:44.209378  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:44.209383  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:44.213843  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:44.708627  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:44.708655  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:44.708667  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:44.708674  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:44.712946  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:45.208740  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:45.208767  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:45.208780  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:45.208787  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:45.212825  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:45.213803  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:45.709194  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:45.709226  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:45.709239  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:45.709244  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:45.713036  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:46.209154  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:46.209181  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:46.209192  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:46.209196  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:46.212466  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:46.708677  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:46.708707  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:46.708716  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:46.708724  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:46.712340  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:47.208818  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:47.208842  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:47.208851  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:47.208857  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:47.212615  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:47.709164  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:47.709193  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:47.709202  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:47.709205  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:47.713234  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:47.713781  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:48.209498  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:48.209525  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:48.209534  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:48.209537  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:48.213755  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:48.708587  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:48.708611  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:48.708621  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:48.708624  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:48.712036  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:49.208568  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:49.208592  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:49.208603  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:49.208607  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:49.211903  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:49.708691  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:49.708716  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:49.708725  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:49.708729  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:49.712776  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:50.208877  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:50.208902  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:50.208911  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:50.208914  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:50.212493  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:50.213081  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:50.709538  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:50.709562  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:50.709571  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:50.709575  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:50.713279  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:51.209230  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:51.209256  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:51.209265  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:51.209268  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:51.213382  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:51.708830  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:51.708854  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:51.708862  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:51.708866  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:51.712240  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:52.208900  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:52.208926  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:52.208939  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:52.208946  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:52.215313  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:52.216003  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:52.708705  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:52.708730  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:52.708738  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:52.708742  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:52.712616  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:53.209443  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:53.209470  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:53.209480  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:53.209484  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:53.220542  401591 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:09:53.709519  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:53.709546  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:53.709558  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:53.709564  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:53.716163  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:09:54.208707  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:54.208734  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:54.208746  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:54.208760  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:54.213435  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:54.708587  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:54.708610  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:54.708619  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:54.708622  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:54.712056  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:54.712859  401591 node_ready.go:53] node "ha-628553-m03" has status "Ready":"False"
	I1007 12:09:55.209203  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:55.209231  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:55.209239  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:55.209245  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:55.212768  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:55.708667  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:55.708695  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:55.708703  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:55.708707  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:55.712313  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.209354  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:56.209383  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.209395  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.209403  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.213377  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.708881  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:56.708908  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.708919  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.708924  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.712370  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.712935  401591 node_ready.go:49] node "ha-628553-m03" has status "Ready":"True"
	I1007 12:09:56.712963  401591 node_ready.go:38] duration metric: took 15.504655916s for node "ha-628553-m03" to be "Ready" ...
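
The node_ready loop above issues a GET roughly every 500ms until the node reports the Ready condition, capped at 6m0s. A minimal client-go sketch of the same wait is shown below; the kubeconfig path and node name are copied from the log purely for illustration, and this is not the test harness's own code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the API server until the node reports Ready, mirroring
// the interval and timeout visible in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient API errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19763-377026/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-628553-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
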
	I1007 12:09:56.712977  401591 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:09:56.713073  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:09:56.713085  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.713097  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.713103  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.718978  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:09:56.726344  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.726456  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:09:56.726466  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.726474  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.726490  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.730546  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:56.731604  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.731626  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.731635  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.731641  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.735028  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.735631  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.735652  401591 pod_ready.go:82] duration metric: took 9.273238ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.735664  401591 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.735733  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:09:56.735741  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.735750  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.735755  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.739406  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.740176  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.740199  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.740209  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.740214  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.743560  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.744246  401591 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.744282  401591 pod_ready.go:82] duration metric: took 8.60988ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.744297  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.744377  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:09:56.744385  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.744394  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.744399  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.747762  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.748602  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:56.748620  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.748631  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.748635  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.751819  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.752620  401591 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.752643  401591 pod_ready.go:82] duration metric: took 8.33893ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.752653  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.752721  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:09:56.752728  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.752736  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.752744  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.755841  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:56.756900  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:56.756919  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.756928  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.756933  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.762051  401591 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:09:56.762546  401591 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:56.762567  401591 pod_ready.go:82] duration metric: took 9.907016ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.762577  401591 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:56.908942  401591 request.go:632] Waited for 146.263139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:09:56.909015  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:09:56.909020  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:56.909028  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:56.909033  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:56.912564  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.109760  401591 request.go:632] Waited for 196.38743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:57.109828  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:57.109833  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.109841  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.109845  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.113445  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.114014  401591 pod_ready.go:93] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.114033  401591 pod_ready.go:82] duration metric: took 351.449136ms for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
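
The "Waited for ... due to client-side throttling" messages come from client-go's default client-side rate limiter (QPS 5, Burst 10; the zero QPS/Burst values in the rest.Config dump earlier mean those defaults apply), which kicks in when the readiness loop issues many GETs back to back. A short sketch of raising the limits on a rest.Config; the values chosen are illustrative.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFasterClient builds a clientset with a higher client-side rate limit so
// bursts of polling requests are not delayed by the default limiter.
func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // sustained requests per second allowed by the limiter
	cfg.Burst = 100 // short bursts above QPS before throttling kicks in
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newFasterClient("/home/jenkins/minikube-integration/19763-377026/kubeconfig"); err != nil {
		panic(err)
	}
}
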
	I1007 12:09:57.114057  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.309353  401591 request.go:632] Waited for 195.205622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:09:57.309419  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:09:57.309425  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.309432  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.309437  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.313075  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.509082  401591 request.go:632] Waited for 195.305317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:57.509151  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:57.509155  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.509166  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.509174  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.512625  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.513112  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.513132  401591 pod_ready.go:82] duration metric: took 399.067745ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.513143  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.709708  401591 request.go:632] Waited for 196.474408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:09:57.709781  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:09:57.709786  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.709794  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.709800  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.713831  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:57.908898  401591 request.go:632] Waited for 194.228676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:57.908982  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:57.908989  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:57.909010  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:57.909018  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:57.912443  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:57.912928  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:57.912946  401591 pod_ready.go:82] duration metric: took 399.796848ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:57.912957  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.109126  401591 request.go:632] Waited for 196.089672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:09:58.109228  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:09:58.109239  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.109254  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.109263  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.113302  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:58.309458  401591 request.go:632] Waited for 195.377342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:58.309526  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:58.309532  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.309540  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.309547  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.313264  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.313917  401591 pod_ready.go:93] pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:58.313941  401591 pod_ready.go:82] duration metric: took 400.976971ms for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.313953  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.508886  401591 request.go:632] Waited for 194.833329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:09:58.508952  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:09:58.508957  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.508965  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.508968  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.512699  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.709582  401591 request.go:632] Waited for 196.246847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:58.709646  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:09:58.709651  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.709659  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.709664  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.713267  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:58.713852  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:58.713872  401591 pod_ready.go:82] duration metric: took 399.911675ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.713882  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:58.909557  401591 request.go:632] Waited for 195.589727ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:09:58.909638  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:09:58.909646  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:58.909658  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:58.909667  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:58.913323  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.109300  401591 request.go:632] Waited for 195.248412ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:59.109385  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:09:59.109397  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.109413  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.109423  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.113724  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.114391  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.114424  401591 pod_ready.go:82] duration metric: took 400.532344ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.114440  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.309421  401591 request.go:632] Waited for 194.863237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:09:59.309496  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:09:59.309505  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.309513  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.309517  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.313524  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.509863  401591 request.go:632] Waited for 195.376113ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.509933  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.509939  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.509947  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.509952  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.514238  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.514980  401591 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.515006  401591 pod_ready.go:82] duration metric: took 400.556348ms for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.515021  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.708902  401591 request.go:632] Waited for 193.788377ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:09:59.708979  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:09:59.708984  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.708994  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.708999  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.713254  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:09:59.909528  401591 request.go:632] Waited for 195.290175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.909618  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:09:59.909629  401591 round_trippers.go:469] Request Headers:
	I1007 12:09:59.909647  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:09:59.909670  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:09:59.913334  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:09:59.913821  401591 pod_ready.go:93] pod "kube-proxy-956k4" in "kube-system" namespace has status "Ready":"True"
	I1007 12:09:59.913839  401591 pod_ready.go:82] duration metric: took 398.810891ms for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:09:59.913849  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.108920  401591 request.go:632] Waited for 194.960284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:10:00.108989  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:10:00.108994  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.109003  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.109008  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.112562  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.309314  401591 request.go:632] Waited for 195.880007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:00.309383  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:00.309388  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.309398  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.309402  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.312741  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.313358  401591 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:00.313387  401591 pod_ready.go:82] duration metric: took 399.529803ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.313403  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.509443  401591 request.go:632] Waited for 195.933785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:10:00.509525  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:10:00.509534  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.509546  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.509553  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.513184  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:00.709406  401591 request.go:632] Waited for 195.365479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:00.709504  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:00.709514  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.709522  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.709529  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.713607  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:00.714279  401591 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:00.714309  401591 pod_ready.go:82] duration metric: took 400.896557ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.714325  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:00.909245  401591 request.go:632] Waited for 194.818143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:10:00.909342  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:10:00.909351  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:00.909364  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:00.909371  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:00.915481  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:01.109624  401591 request.go:632] Waited for 193.409101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:01.109691  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:10:01.109697  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.109705  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.109709  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.113699  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.114360  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.114385  401591 pod_ready.go:82] duration metric: took 400.050276ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.114400  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.309693  401591 request.go:632] Waited for 195.205987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:10:01.309795  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:10:01.309803  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.309815  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.309822  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.313815  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.508909  401591 request.go:632] Waited for 194.37677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:01.508986  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:10:01.508991  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.509002  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.509007  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.512742  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:01.513256  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.513276  401591 pod_ready.go:82] duration metric: took 398.86838ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.513288  401591 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.709917  401591 request.go:632] Waited for 196.548883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:10:01.710017  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:10:01.710026  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.710034  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.710039  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.714122  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:01.909434  401591 request.go:632] Waited for 194.3948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:10:01.909513  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:10:01.909522  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.909532  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.909540  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.913611  401591 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:10:01.914046  401591 pod_ready.go:93] pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:10:01.914070  401591 pod_ready.go:82] duration metric: took 400.775584ms for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:10:01.914081  401591 pod_ready.go:39] duration metric: took 5.201089226s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
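The pod_ready waits recorded above poll each system pod over the API server and report it Ready once the pod's Ready condition turns True. Below is a minimal, illustrative client-go sketch of that kind of readiness poll; the function name waitPodReady, the 2-second poll interval, and building the client from the KUBECONFIG environment variable are assumptions for the sketch, not minikube's actual pod_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls one pod until its Ready condition reports True or the
// timeout expires. Illustrative sketch only.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-h6vg8", 6*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}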
	I1007 12:10:01.914096  401591 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:10:01.914154  401591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:10:01.933363  401591 api_server.go:72] duration metric: took 20.994747532s to wait for apiserver process to appear ...
	I1007 12:10:01.933396  401591 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:10:01.933418  401591 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:10:01.938101  401591 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1007 12:10:01.938189  401591 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:10:01.938198  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:01.938207  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:01.938213  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:01.939122  401591 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:10:01.939199  401591 api_server.go:141] control plane version: v1.31.1
	I1007 12:10:01.939214  401591 api_server.go:131] duration metric: took 5.812529ms to wait for apiserver health ...
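The healthz probe logged above is a plain HTTPS GET against the API server's /healthz endpoint, which returns the literal body "ok" when healthy. A minimal sketch of the same check follows; skipping TLS verification is purely for illustration here, whereas minikube authenticates with the cluster's CA and client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// NOTE: InsecureSkipVerify is only for this sketch.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.110:8443/healthz"
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}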
	I1007 12:10:01.939225  401591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:10:02.109608  401591 request.go:632] Waited for 170.278268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.109688  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.109696  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.109710  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.109721  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.116583  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:02.124470  401591 system_pods.go:59] 24 kube-system pods found
	I1007 12:10:02.124519  401591 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:10:02.124524  401591 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:10:02.124528  401591 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:10:02.124532  401591 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:10:02.124537  401591 system_pods.go:61] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:10:02.124541  401591 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:10:02.124545  401591 system_pods.go:61] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:10:02.124549  401591 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:10:02.124553  401591 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:10:02.124556  401591 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:10:02.124559  401591 system_pods.go:61] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:10:02.124563  401591 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:10:02.124566  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:10:02.124569  401591 system_pods.go:61] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:10:02.124572  401591 system_pods.go:61] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:10:02.124576  401591 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:10:02.124579  401591 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:10:02.124582  401591 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:10:02.124585  401591 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:10:02.124588  401591 system_pods.go:61] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:10:02.124591  401591 system_pods.go:61] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:10:02.124594  401591 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:10:02.124597  401591 system_pods.go:61] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:10:02.124600  401591 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:10:02.124608  401591 system_pods.go:74] duration metric: took 185.374126ms to wait for pod list to return data ...
	I1007 12:10:02.124621  401591 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:10:02.309914  401591 request.go:632] Waited for 185.18335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:10:02.309989  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:10:02.309995  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.310010  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.310017  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.318042  401591 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:10:02.318207  401591 default_sa.go:45] found service account: "default"
	I1007 12:10:02.318235  401591 default_sa.go:55] duration metric: took 193.599365ms for default service account to be created ...
	I1007 12:10:02.318250  401591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:10:02.509774  401591 request.go:632] Waited for 191.420927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.509840  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:10:02.509853  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.509866  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.509875  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.516685  401591 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:10:02.523464  401591 system_pods.go:86] 24 kube-system pods found
	I1007 12:10:02.523503  401591 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:10:02.523511  401591 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:10:02.523516  401591 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:10:02.523522  401591 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:10:02.523528  401591 system_pods.go:89] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:10:02.523534  401591 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:10:02.523539  401591 system_pods.go:89] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:10:02.523573  401591 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:10:02.523579  401591 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:10:02.523585  401591 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:10:02.523591  401591 system_pods.go:89] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:10:02.523606  401591 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:10:02.523613  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:10:02.523619  401591 system_pods.go:89] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:10:02.523628  401591 system_pods.go:89] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:10:02.523634  401591 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:10:02.523640  401591 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:10:02.523651  401591 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:10:02.523657  401591 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:10:02.523662  401591 system_pods.go:89] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:10:02.523668  401591 system_pods.go:89] "kube-vip-ha-628553" [d3799e6d-af22-404d-9322-ff0c9e7fa931] Running
	I1007 12:10:02.523674  401591 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:10:02.523679  401591 system_pods.go:89] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:10:02.523685  401591 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:10:02.523697  401591 system_pods.go:126] duration metric: took 205.439551ms to wait for k8s-apps to be running ...
	I1007 12:10:02.523709  401591 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:10:02.523771  401591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:10:02.542038  401591 system_svc.go:56] duration metric: took 18.318301ms WaitForService to wait for kubelet
	I1007 12:10:02.542084  401591 kubeadm.go:582] duration metric: took 21.603472414s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:10:02.542109  401591 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:10:02.709771  401591 request.go:632] Waited for 167.539386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:10:02.709854  401591 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:10:02.709863  401591 round_trippers.go:469] Request Headers:
	I1007 12:10:02.709874  401591 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:10:02.709884  401591 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:10:02.713363  401591 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:10:02.714361  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714384  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714396  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714401  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714406  401591 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:10:02.714409  401591 node_conditions.go:123] node cpu capacity is 2
	I1007 12:10:02.714415  401591 node_conditions.go:105] duration metric: took 172.299605ms to run NodePressure ...
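The NodePressure step above lists every node and reads the capacities the kubelet reports (ephemeral storage and CPU in this run). A minimal client-go sketch of that same read is shown here; building the client from the KUBECONFIG environment variable is an assumption of the sketch, not how minikube's node_conditions.go obtains its client.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Status.Capacity is the resource list reported by the node's kubelet.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}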
	I1007 12:10:02.714430  401591 start.go:241] waiting for startup goroutines ...
	I1007 12:10:02.714459  401591 start.go:255] writing updated cluster config ...
	I1007 12:10:02.714781  401591 ssh_runner.go:195] Run: rm -f paused
	I1007 12:10:02.769817  401591 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:10:02.771879  401591 out.go:177] * Done! kubectl is now configured to use "ha-628553" cluster and "default" namespace by default
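The CRI-O log that follows records CRI API calls (Version, ImageFsInfo, ListContainers) arriving at the runtime. One way to issue the same ListContainers request from the node is crictl; the sketch below shells out to it from Go and assumes crictl is installed and that CRI-O listens on its default unix socket path, which may differ on other setups.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// crictl talks to the CRI runtime over its socket and issues the same
	// ListContainers RPC that appears in the CRI-O debug log below.
	out, err := exec.Command("sudo", "crictl", "--runtime-endpoint",
		"unix:///var/run/crio/crio.sock", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(string(out))
}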
	
	
	==> CRI-O <==
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.753269065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303241753245062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0f4ed02e-c51c-4885-91d8-04f3456e43b3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.753897568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=191c49ff-2396-4c40-b27e-9f639c45d41b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.753985250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=191c49ff-2396-4c40-b27e-9f639c45d41b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.754218046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=191c49ff-2396-4c40-b27e-9f639c45d41b name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.795631702Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fdd715f2-0fcd-4d86-942b-ed1b9beea1e9 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.795706819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fdd715f2-0fcd-4d86-942b-ed1b9beea1e9 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.797167863Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e144afe-365f-4e5b-a6d6-5f40add2f09b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.797620300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303241797594715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e144afe-365f-4e5b-a6d6-5f40add2f09b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.798587093Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e205a58-0043-461c-87e5-97c5bcc0e850 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.798665434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e205a58-0043-461c-87e5-97c5bcc0e850 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.798969470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e205a58-0043-461c-87e5-97c5bcc0e850 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.854608756Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a19dd7a0-a7cc-4d05-ad86-af2d65ad656e name=/runtime.v1.RuntimeService/Version
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.854728647Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a19dd7a0-a7cc-4d05-ad86-af2d65ad656e name=/runtime.v1.RuntimeService/Version
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.856094382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21470c12-39cc-4ede-8b8c-bb50c4f40fe4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.856692110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303241856660262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21470c12-39cc-4ede-8b8c-bb50c4f40fe4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.857281380Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec34272f-4340-459b-a174-77aee2e08748 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.857378559Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec34272f-4340-459b-a174-77aee2e08748 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.857744663Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec34272f-4340-459b-a174-77aee2e08748 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.903326167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16d4dd44-ab2e-4372-a73c-f1cb6d2e0a3e name=/runtime.v1.RuntimeService/Version
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.903553432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16d4dd44-ab2e-4372-a73c-f1cb6d2e0a3e name=/runtime.v1.RuntimeService/Version
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.905474044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc874629-6d8f-4ee9-a9c2-481afdf8253e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.906055494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303241906029363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc874629-6d8f-4ee9-a9c2-481afdf8253e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.906685594Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0564c941-c0eb-450a-8b0c-885bfb9d8077 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.906746364Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0564c941-c0eb-450a-8b0c-885bfb9d8077 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:14:01 ha-628553 crio[670]: time="2024-10-07 12:14:01.907062444Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cac09519e9d839226a39444abd2043b6f19fc10ec7b4bc9adda7f33b183402eb,PodSandboxId:3588af1ea926cb139e0494a685d676d27bdcf36be1224f7503e0e7bc7e35ac8d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303007095363171,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:914d5a55b5b7f365cfab01c06df205e9df51e78f22b2e7e63987150f7321a637,PodSandboxId:e4273414ae3c991bd1b2869c8916108bd07b11ecd715e221b1649691f8c57a39,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728302865051238243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed,PodSandboxId:7a74be057c048bd0bf9c2836deaa311995875189b2cbd21d9d8a90387a3e8ce0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302865037164514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68,PodSandboxId:66f721a704d2d039f0ef6c8b7b9cd6e609e0a61f012994b5b89a3cee4b52a113,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728302864973475798,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-54
07-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e,PodSandboxId:883a1bf7435de112cb34c607c0fe688a3118731c66ef2957cae35a3d557b0d40,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17283028
53029005413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489,PodSandboxId:4ad2a2a2eae50620a3b1dbab780fb2370d66a903d378e49a46d0f5f7c9eceec2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728302852610463906,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41e1b6a8666620d5da5f6fe487934a9338e38cfda3892d71054bd4abf4dc5bf1,PodSandboxId:9107fefdb6ecabd1263b85bf8795f24469859628de84869b640a0dea27705ef8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728302843975480101,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66183128b21172d80a580f972f2b00a0,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969,PodSandboxId:e611d474900bca4b8c16e8dc3b224f95bc8b46150205733e8cb2ea8d7e5d4319,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728302841624947086,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4,PodSandboxId:adfc5c5b9565a6498ad1c727673f58b97fa84c533341c73997f50b6498b6db54,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728302841559294340,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-
ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee,PodSandboxId:ce8ef37c98c4f4509d3fe3059c07d02766eb8e2b154c214a1f794e1fa89d71cc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728302841497183886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-ma
nager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544,PodSandboxId:923ba0f2be0022c44988b7380e3629766786b9259b290d95333b55de7ffcc267,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728302841423447433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0564c941-c0eb-450a-8b0c-885bfb9d8077 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cac09519e9d83       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3588af1ea926c       busybox-7dff88458-vc5k8
	914d5a55b5b7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   e4273414ae3c9       storage-provisioner
	4dcac83715ae5       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   7a74be057c048       coredns-7c65d6cfc9-rsr6v
	0a438e52c0996       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   66f721a704d2d       coredns-7c65d6cfc9-ktmzq
	b10875321ed8d       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   883a1bf7435de       kindnet-snf5v
	4a0b203aaca5a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   4ad2a2a2eae50       kube-proxy-h6vg8
	41e1b6a866662       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   9107fefdb6eca       kube-vip-ha-628553
	02649d86a8d5c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   e611d474900bc       etcd-ha-628553
	1a3ce3a4cad16       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   adfc5c5b9565a       kube-scheduler-ha-628553
	73e39c7d2b39b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ce8ef37c98c4f       kube-controller-manager-ha-628553
	919f5b2c17a09       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   923ba0f2be002       kube-apiserver-ha-628553
	
	
	==> coredns [0a438e52c0996eeab8ba029103373e78ced9c323664eb125dfec34b846183f68] <==
	[INFO] 10.244.1.2:59173 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004406792s
	[INFO] 10.244.1.2:44478 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000424413s
	[INFO] 10.244.1.2:58960 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000183491s
	[INFO] 10.244.1.3:35630 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000291506s
	[INFO] 10.244.1.3:42806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002399052s
	[INFO] 10.244.1.3:42397 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126644s
	[INFO] 10.244.1.3:34571 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001931949s
	[INFO] 10.244.1.3:54485 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000378487s
	[INFO] 10.244.1.3:58977 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105091s
	[INFO] 10.244.0.4:38892 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002053345s
	[INFO] 10.244.0.4:58836 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000172655s
	[INFO] 10.244.0.4:55251 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065314s
	[INFO] 10.244.0.4:53436 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001570291s
	[INFO] 10.244.0.4:48063 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00004804s
	[INFO] 10.244.1.2:57025 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153957s
	[INFO] 10.244.1.2:40431 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012349s
	[INFO] 10.244.1.3:37153 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139765s
	[INFO] 10.244.1.3:45214 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157416s
	[INFO] 10.244.1.3:47978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094264s
	[INFO] 10.244.0.4:57791 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080137s
	[INFO] 10.244.1.2:51888 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000215918s
	[INFO] 10.244.1.2:42893 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000166709s
	[INFO] 10.244.1.3:36056 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000172229s
	[INFO] 10.244.1.3:44744 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000113708s
	[INFO] 10.244.0.4:56467 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000102183s
	
	
	==> coredns [4dcac83715ae5c3891812064f8de21006bc4aaf38a204c245cc256baacdb04ed] <==
	[INFO] 10.244.1.3:51613 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000585499s
	[INFO] 10.244.1.3:40629 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001993531s
	[INFO] 10.244.0.4:40285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000080316s
	[INFO] 10.244.1.2:53385 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200211s
	[INFO] 10.244.1.2:46841 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.028903254s
	[INFO] 10.244.1.2:36156 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000295572s
	[INFO] 10.244.1.2:46979 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159813s
	[INFO] 10.244.1.3:47839 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190478s
	[INFO] 10.244.1.3:55618 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000314649s
	[INFO] 10.244.0.4:52728 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150624s
	[INFO] 10.244.0.4:42394 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090784s
	[INFO] 10.244.0.4:57656 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107027s
	[INFO] 10.244.1.2:36030 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124775s
	[INFO] 10.244.1.2:57899 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082756s
	[INFO] 10.244.1.3:44889 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000195326s
	[INFO] 10.244.0.4:59043 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137163s
	[INFO] 10.244.0.4:52080 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217774s
	[INFO] 10.244.0.4:40645 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102774s
	[INFO] 10.244.1.2:59521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150669s
	[INFO] 10.244.1.2:34929 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000205398s
	[INFO] 10.244.1.3:50337 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185196s
	[INFO] 10.244.1.3:51645 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000242498s
	[INFO] 10.244.0.4:58847 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134448s
	[INFO] 10.244.0.4:51647 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000147028s
	[INFO] 10.244.0.4:54351 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000131375s
	
	
	==> describe nodes <==
	Name:               ha-628553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_07_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:10:31 +0000   Mon, 07 Oct 2024 12:07:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-628553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a13f7b7982a74b9eb8f82488f9c3d1a6
	  System UUID:                a13f7b79-82a7-4b9e-b8f8-2488f9c3d1a6
	  Boot ID:                    288ea8ab-36c4-4d6a-9093-1f2ac800cc46
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vc5k8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 coredns-7c65d6cfc9-ktmzq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m30s
	  kube-system                 coredns-7c65d6cfc9-rsr6v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m30s
	  kube-system                 etcd-ha-628553                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m34s
	  kube-system                 kindnet-snf5v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-apiserver-ha-628553             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ha-628553    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-proxy-h6vg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-scheduler-ha-628553             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-628553                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m28s  kube-proxy       
	  Normal  Starting                 6m35s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m34s  kubelet          Node ha-628553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s  kubelet          Node ha-628553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s  kubelet          Node ha-628553 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m31s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  NodeReady                6m18s  kubelet          Node ha-628553 status is now: NodeReady
	  Normal  RegisteredNode           5m31s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  RegisteredNode           4m17s  node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	
	
	Name:               ha-628553-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_08_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:08:22 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:11:16 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Oct 2024 12:10:24 +0000   Mon, 07 Oct 2024 12:11:56 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-628553-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ba9ae7572f54f4ab8de307b6e86da52
	  System UUID:                4ba9ae75-72f5-4f4a-b8de-307b6e86da52
	  Boot ID:                    30fbb024-4877-4642-abd8-af8d3d30f079
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-75ng4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  default                     busybox-7dff88458-jhmrp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 etcd-ha-628553-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m38s
	  kube-system                 kindnet-9rq2w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m40s
	  kube-system                 kube-apiserver-ha-628553-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-ha-628553-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-proxy-s5c6d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-scheduler-ha-628553-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-vip-ha-628553-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m40s)  kubelet          Node ha-628553-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m40s)  kubelet          Node ha-628553-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x7 over 5m40s)  kubelet          Node ha-628553-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m36s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  NodeNotReady             2m6s                   node-controller  Node ha-628553-m02 status is now: NodeNotReady
	
	
	Name:               ha-628553-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_09_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:09:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:14:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:10:07 +0000   Mon, 07 Oct 2024 12:09:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-628553-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aab92960db1b4070940c89c6ff930351
	  System UUID:                aab92960-db1b-4070-940c-89c6ff930351
	  Boot ID:                    77629bba-9229-47e7-80cf-730097c43666
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-628553-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m23s
	  kube-system                 kindnet-sb4xd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m26s
	  kube-system                 kube-apiserver-ha-628553-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-ha-628553-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-956k4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-ha-628553-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-vip-ha-628553-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m21s                  kube-proxy       
	  Normal  RegisteredNode           4m26s                  node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node ha-628553-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	
	
	Name:               ha-628553-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_10_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:10:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:13:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:10:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:11:12 +0000   Mon, 07 Oct 2024 12:11:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    ha-628553-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7e249f18a3f466abcbb6b94b02ed2ec
	  System UUID:                b7e249f1-8a3f-466a-bcbb-6b94b02ed2ec
	  Boot ID:                    dd833219-3ee8-4ed9-aae9-d441f250fa96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwk2r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m21s
	  kube-system                 kube-proxy-fkzqr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m21s (x2 over 3m21s)  kubelet          Node ha-628553-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m21s (x2 over 3m21s)  kubelet          Node ha-628553-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m21s (x2 over 3m21s)  kubelet          Node ha-628553-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m20s                  node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal  NodeReady                3m1s                   kubelet          Node ha-628553-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 12:06] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051409] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040490] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.878273] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.715451] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Oct 7 12:07] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.378547] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061855] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066201] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.180086] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.153013] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.284998] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.180207] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.207557] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.061569] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.415206] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.085223] kauditd_printk_skb: 79 callbacks suppressed
	[  +4.998659] kauditd_printk_skb: 26 callbacks suppressed
	[ +12.170600] kauditd_printk_skb: 33 callbacks suppressed
	[Oct 7 12:08] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [02649d86a8d5c0ddb0e88749bd5c987dddc242275140ec2cd3e4de6b73284969] <==
	{"level":"warn","ts":"2024-10-07T12:14:02.190810Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.191686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.193489Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.203261Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.208176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.217227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.224185Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.231604Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.236306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.240600Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.247479Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.254345Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.260969Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.266397Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.270156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.279286Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.286677Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.289985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.295237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.300238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.304291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.309585Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.317369Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.324289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-07T12:14:02.390156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fbb007bab925a598","from":"fbb007bab925a598","remote-peer-id":"c4e3087522f8e2e6","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:14:02 up 7 min,  0 users,  load average: 0.40, 0.29, 0.15
	Linux ha-628553 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b10875321ed8dbc68145fbc9533e16e2914429ff394fb768517d97899d523b0e] <==
	I1007 12:13:24.296384       1 main.go:299] handling current node
	I1007 12:13:34.285463       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:34.285588       1 main.go:299] handling current node
	I1007 12:13:34.285620       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:34.285640       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:34.285850       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:34.285880       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:34.285943       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:34.285960       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:44.285393       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:44.285467       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:44.285666       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:44.285751       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:44.285880       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:44.285904       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:44.285950       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:44.285956       1 main.go:299] handling current node
	I1007 12:13:54.294585       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:13:54.294702       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:13:54.294938       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:13:54.294972       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:13:54.295048       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:13:54.295074       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:13:54.295150       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:13:54.295172       1 main.go:299] handling current node
	
	
	==> kube-apiserver [919f5b2c17a091997f1d300391f3c5e05df04072f5abdf7aba7cf2c6f89dd544] <==
	I1007 12:07:27.794940       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1007 12:07:27.933633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1007 12:07:32.075355       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1007 12:07:32.486677       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1007 12:08:23.102352       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.102586       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 15.764µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1007 12:08:23.104149       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.105567       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1007 12:08:23.106920       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.674679ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1007 12:10:08.360356       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40292: use of closed network connection
	E1007 12:10:08.561113       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40308: use of closed network connection
	E1007 12:10:08.787138       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40330: use of closed network connection
	E1007 12:10:09.028668       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40344: use of closed network connection
	E1007 12:10:09.244263       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40368: use of closed network connection
	E1007 12:10:09.466935       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40384: use of closed network connection
	E1007 12:10:09.660058       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40410: use of closed network connection
	E1007 12:10:09.852210       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40416: use of closed network connection
	E1007 12:10:10.061165       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40432: use of closed network connection
	E1007 12:10:10.408420       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40450: use of closed network connection
	E1007 12:10:10.612165       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40466: use of closed network connection
	E1007 12:10:10.805485       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40472: use of closed network connection
	E1007 12:10:10.999177       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40496: use of closed network connection
	E1007 12:10:11.210763       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40502: use of closed network connection
	E1007 12:10:11.463496       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:40532: use of closed network connection
	W1007 12:11:36.878261       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110 192.168.39.149]
	
	
	==> kube-controller-manager [73e39c7d2b39bc1c3ef117bac16aa56691270de7ba2a3336df4bb775d0cbc0ee] <==
	I1007 12:10:41.965922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.001526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.152486       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.245459       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:42.660674       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:45.679644       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:45.726419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:46.774324       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-628553-m04"
	I1007 12:10:46.775093       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:46.796998       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:10:52.359490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:01.889908       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-628553-m04"
	I1007 12:11:01.891629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:01.908947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:02.079930       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:12.784052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:11:56.797865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-628553-m04"
	I1007 12:11:56.798196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:11:56.825210       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:11:56.976985       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.040351ms"
	I1007 12:11:56.977093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.478µs"
	I1007 12:11:57.005615       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.252446ms"
	I1007 12:11:57.005705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.783µs"
	I1007 12:12:00.745939       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:12:02.094451       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	
	
	==> kube-proxy [4a0b203aaca5a29f0fd90ab4a872973750fbc2d616e5bd498c1b88de89f71489] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:07:33.298365       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:07:33.336456       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E1007 12:07:33.336571       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:07:33.434284       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:07:33.434331       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:07:33.434355       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:07:33.445592       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:07:33.454423       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:07:33.454444       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:07:33.463602       1 config.go:199] "Starting service config controller"
	I1007 12:07:33.467216       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:07:33.467268       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:07:33.467274       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:07:33.472850       1 config.go:328] "Starting node config controller"
	I1007 12:07:33.472863       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:07:33.568004       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:07:33.568062       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:07:33.573613       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1a3ce3a4cad16d14715220abc869524505a926262559ec8be18702fae8708ac4] <==
	E1007 12:07:26.382246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:07:26.387024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 12:07:26.387119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:07:26.410415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 12:07:26.410570       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 12:07:27.604975       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 12:10:03.714499       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="38d0a2a6-0d77-403c-86e7-405837d8ca25" pod="default/busybox-7dff88458-jhmrp" assumedNode="ha-628553-m02" currentNode="ha-628553-m03"
	E1007 12:10:03.740391       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jhmrp\": pod busybox-7dff88458-jhmrp is already assigned to node \"ha-628553-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-jhmrp" node="ha-628553-m03"
	E1007 12:10:03.743143       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 38d0a2a6-0d77-403c-86e7-405837d8ca25(default/busybox-7dff88458-jhmrp) was assumed on ha-628553-m03 but assigned to ha-628553-m02" pod="default/busybox-7dff88458-jhmrp"
	E1007 12:10:03.745165       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-jhmrp\": pod busybox-7dff88458-jhmrp is already assigned to node \"ha-628553-m02\"" pod="default/busybox-7dff88458-jhmrp"
	I1007 12:10:03.747831       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-jhmrp" node="ha-628553-m02"
	E1007 12:10:03.791061       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vc5k8\": pod busybox-7dff88458-vc5k8 is already assigned to node \"ha-628553\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-vc5k8" node="ha-628553-m03"
	E1007 12:10:03.791192       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-vc5k8\": pod busybox-7dff88458-vc5k8 is already assigned to node \"ha-628553\"" pod="default/busybox-7dff88458-vc5k8"
	E1007 12:10:03.910449       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-47zsz\": pod busybox-7dff88458-47zsz is already assigned to node \"ha-628553-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-47zsz" node="ha-628553-m03"
	E1007 12:10:03.910515       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 674a626e-9fe6-4875-a34f-cc4d729e2bb1(default/busybox-7dff88458-47zsz) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-47zsz"
	E1007 12:10:03.910531       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-47zsz\": pod busybox-7dff88458-47zsz is already assigned to node \"ha-628553-m03\"" pod="default/busybox-7dff88458-47zsz"
	I1007 12:10:03.910555       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-47zsz" node="ha-628553-m03"
	E1007 12:10:42.040635       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rwk2r\": pod kindnet-rwk2r is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-rwk2r" node="ha-628553-m04"
	E1007 12:10:42.042987       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rwk2r\": pod kindnet-rwk2r is already assigned to node \"ha-628553-m04\"" pod="kube-system/kindnet-rwk2r"
	E1007 12:10:42.079633       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-kl4j4\": pod kindnet-kl4j4 is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-kl4j4" node="ha-628553-m04"
	E1007 12:10:42.079724       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 244c4da8-46b7-4627-a7ad-60e7ff405b0a(kube-system/kindnet-kl4j4) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-kl4j4"
	E1007 12:10:42.079846       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-kl4j4\": pod kindnet-kl4j4 is already assigned to node \"ha-628553-m04\"" pod="kube-system/kindnet-kl4j4"
	I1007 12:10:42.079871       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-kl4j4" node="ha-628553-m04"
	E1007 12:10:42.086167       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-g2fwp\": pod kube-proxy-g2fwp is already assigned to node \"ha-628553-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-g2fwp" node="ha-628553-m04"
	E1007 12:10:42.086272       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-g2fwp\": pod kube-proxy-g2fwp is already assigned to node \"ha-628553-m04\"" pod="kube-system/kube-proxy-g2fwp"
	
	
	==> kubelet <==
	Oct 07 12:12:28 ha-628553 kubelet[1314]: E1007 12:12:28.044744    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303148044534034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:28 ha-628553 kubelet[1314]: E1007 12:12:28.044838    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303148044534034,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:38 ha-628553 kubelet[1314]: E1007 12:12:38.050523    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303158047005260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:38 ha-628553 kubelet[1314]: E1007 12:12:38.051561    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303158047005260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:48 ha-628553 kubelet[1314]: E1007 12:12:48.053900    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303168053449361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:48 ha-628553 kubelet[1314]: E1007 12:12:48.053963    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303168053449361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:58 ha-628553 kubelet[1314]: E1007 12:12:58.055856    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303178055537621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:12:58 ha-628553 kubelet[1314]: E1007 12:12:58.055895    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303178055537621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:08 ha-628553 kubelet[1314]: E1007 12:13:08.057102    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303188056723208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:08 ha-628553 kubelet[1314]: E1007 12:13:08.057351    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303188056723208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:18 ha-628553 kubelet[1314]: E1007 12:13:18.061478    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303198060609364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:18 ha-628553 kubelet[1314]: E1007 12:13:18.061853    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303198060609364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:27 ha-628553 kubelet[1314]: E1007 12:13:27.990111    1314 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:13:27 ha-628553 kubelet[1314]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:13:27 ha-628553 kubelet[1314]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:13:28 ha-628553 kubelet[1314]: E1007 12:13:28.063998    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303208063333958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:28 ha-628553 kubelet[1314]: E1007 12:13:28.064098    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303208063333958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:38 ha-628553 kubelet[1314]: E1007 12:13:38.066580    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303218065435839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:38 ha-628553 kubelet[1314]: E1007 12:13:38.066632    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303218065435839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:48 ha-628553 kubelet[1314]: E1007 12:13:48.067728    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303228067468647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:48 ha-628553 kubelet[1314]: E1007 12:13:48.067868    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303228067468647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:58 ha-628553 kubelet[1314]: E1007 12:13:58.068851    1314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303238068527943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:13:58 ha-628553 kubelet[1314]: E1007 12:13:58.068891    1314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303238068527943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-628553 -n ha-628553
helpers_test.go:261: (dbg) Run:  kubectl --context ha-628553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.17s)

x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (619.2s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-628553 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-628553 -v=7 --alsologtostderr
E1007 12:14:26.322406  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:15:01.380193  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-628553 -v=7 --alsologtostderr: exit status 82 (2m22.65284314s)

-- stdout --
	* Stopping node "ha-628553-m04"  ...
	* Stopping node "ha-628553-m03"  ...
	* Stopping node "ha-628553-m02"  ...
	* Stopping node "ha-628553"  ...
	
	

-- /stdout --
** stderr ** 
	I1007 12:14:03.464444  406831 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:14:03.464563  406831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:14:03.464570  406831 out.go:358] Setting ErrFile to fd 2...
	I1007 12:14:03.464574  406831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:14:03.464767  406831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:14:03.465001  406831 out.go:352] Setting JSON to false
	I1007 12:14:03.465095  406831 mustload.go:65] Loading cluster: ha-628553
	I1007 12:14:03.465451  406831 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:14:03.465552  406831 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:14:03.465736  406831 mustload.go:65] Loading cluster: ha-628553
	I1007 12:14:03.465867  406831 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:14:03.465897  406831 stop.go:39] StopHost: ha-628553-m04
	I1007 12:14:03.466230  406831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:14:03.466274  406831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:14:03.482158  406831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I1007 12:14:03.482647  406831 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:14:03.483220  406831 main.go:141] libmachine: Using API Version  1
	I1007 12:14:03.483251  406831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:14:03.483618  406831 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:14:03.486202  406831 out.go:177] * Stopping node "ha-628553-m04"  ...
	I1007 12:14:03.487524  406831 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:14:03.487566  406831 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:14:03.487788  406831 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:14:03.487831  406831 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:14:03.490499  406831 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:14:03.490908  406831 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:10:27 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:14:03.490944  406831 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:14:03.491084  406831 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:14:03.491264  406831 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:14:03.491423  406831 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:14:03.491548  406831 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:14:03.580488  406831 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 12:14:03.636889  406831 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 12:14:03.693018  406831 main.go:141] libmachine: Stopping "ha-628553-m04"...
	I1007 12:14:03.693059  406831 main.go:141] libmachine: (ha-628553-m04) Calling .GetState
	I1007 12:14:03.694748  406831 main.go:141] libmachine: (ha-628553-m04) Calling .Stop
	I1007 12:14:03.698921  406831 main.go:141] libmachine: (ha-628553-m04) Waiting for machine to stop 0/120
	I1007 12:14:04.912912  406831 main.go:141] libmachine: (ha-628553-m04) Calling .GetState
	I1007 12:14:04.914098  406831 main.go:141] libmachine: Machine "ha-628553-m04" was stopped.
	I1007 12:14:04.914117  406831 stop.go:75] duration metric: took 1.426595611s to stop
	I1007 12:14:04.914137  406831 stop.go:39] StopHost: ha-628553-m03
	I1007 12:14:04.914449  406831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:14:04.914496  406831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:14:04.930029  406831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
	I1007 12:14:04.930569  406831 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:14:04.931175  406831 main.go:141] libmachine: Using API Version  1
	I1007 12:14:04.931199  406831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:14:04.931587  406831 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:14:04.934543  406831 out.go:177] * Stopping node "ha-628553-m03"  ...
	I1007 12:14:04.936259  406831 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:14:04.936304  406831 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:14:04.936623  406831 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:14:04.936651  406831 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:14:04.940490  406831 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:14:04.941075  406831 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:09:02 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:14:04.941110  406831 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:14:04.941324  406831 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:14:04.941556  406831 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:14:04.941734  406831 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:14:04.941909  406831 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:14:05.034340  406831 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 12:14:05.090187  406831 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 12:14:05.146730  406831 main.go:141] libmachine: Stopping "ha-628553-m03"...
	I1007 12:14:05.146759  406831 main.go:141] libmachine: (ha-628553-m03) Calling .GetState
	I1007 12:14:05.148548  406831 main.go:141] libmachine: (ha-628553-m03) Calling .Stop
	I1007 12:14:05.152696  406831 main.go:141] libmachine: (ha-628553-m03) Waiting for machine to stop 0/120
	I1007 12:14:06.154740  406831 main.go:141] libmachine: (ha-628553-m03) Waiting for machine to stop 1/120
	I1007 12:14:07.156326  406831 main.go:141] libmachine: (ha-628553-m03) Waiting for machine to stop 2/120
	I1007 12:14:08.158901  406831 main.go:141] libmachine: (ha-628553-m03) Calling .GetState
	I1007 12:14:08.160198  406831 main.go:141] libmachine: Machine "ha-628553-m03" was stopped.
	I1007 12:14:08.160220  406831 stop.go:75] duration metric: took 3.22396722s to stop
	I1007 12:14:08.160239  406831 stop.go:39] StopHost: ha-628553-m02
	I1007 12:14:08.160590  406831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:14:08.160643  406831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:14:08.176402  406831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37221
	I1007 12:14:08.176926  406831 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:14:08.177568  406831 main.go:141] libmachine: Using API Version  1
	I1007 12:14:08.177591  406831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:14:08.177934  406831 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:14:08.180979  406831 out.go:177] * Stopping node "ha-628553-m02"  ...
	I1007 12:14:08.182275  406831 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:14:08.182320  406831 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:14:08.182623  406831 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:14:08.182649  406831 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:14:08.185976  406831 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:14:08.186424  406831 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:49 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:14:08.186445  406831 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:14:08.186639  406831 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:14:08.186820  406831 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:14:08.187002  406831 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:14:08.187126  406831 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	W1007 12:14:11.255234  406831 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.169:22: connect: no route to host
	W1007 12:14:11.255362  406831 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.39.169:22: connect: no route to host
	I1007 12:14:11.255395  406831 main.go:141] libmachine: Stopping "ha-628553-m02"...
	I1007 12:14:11.255407  406831 main.go:141] libmachine: (ha-628553-m02) Calling .GetState
	I1007 12:14:11.257086  406831 main.go:141] libmachine: (ha-628553-m02) Calling .Stop
	I1007 12:14:11.260953  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 0/120
	I1007 12:14:12.262520  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 1/120
	I1007 12:14:13.264880  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 2/120
	I1007 12:14:14.266719  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 3/120
	I1007 12:14:15.268180  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 4/120
	I1007 12:14:16.269731  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 5/120
	I1007 12:14:17.271469  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 6/120
	I1007 12:14:18.273673  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 7/120
	I1007 12:14:19.275355  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 8/120
	I1007 12:14:20.277526  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 9/120
	I1007 12:14:21.279045  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 10/120
	I1007 12:14:22.280644  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 11/120
	I1007 12:14:23.282440  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 12/120
	I1007 12:14:24.283905  406831 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 13/120
	I1007 12:14:25.615664  406831 main.go:141] libmachine: (ha-628553-m02) Calling .GetState
	I1007 12:14:25.617107  406831 main.go:141] libmachine: Machine "ha-628553-m02" was stopped.
	I1007 12:14:25.617127  406831 stop.go:75] duration metric: took 17.434859272s to stop
	I1007 12:14:25.617147  406831 stop.go:39] StopHost: ha-628553
	I1007 12:14:25.617450  406831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:14:25.617492  406831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:14:25.632813  406831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I1007 12:14:25.633405  406831 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:14:25.634045  406831 main.go:141] libmachine: Using API Version  1
	I1007 12:14:25.634085  406831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:14:25.634453  406831 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:14:25.636916  406831 out.go:177] * Stopping node "ha-628553"  ...
	I1007 12:14:25.638600  406831 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:14:25.638629  406831 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:14:25.638880  406831 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:14:25.638910  406831 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:14:25.641981  406831 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:14:25.642537  406831 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:14:25.642569  406831 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:14:25.642795  406831 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:14:25.643046  406831 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:14:25.643224  406831 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:14:25.643398  406831 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:14:25.734306  406831 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 12:14:25.796136  406831 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 12:14:25.857780  406831 main.go:141] libmachine: Stopping "ha-628553"...
	I1007 12:14:25.857816  406831 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:14:25.859570  406831 main.go:141] libmachine: (ha-628553) Calling .Stop
	I1007 12:14:25.863484  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 0/120
	I1007 12:14:26.865065  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 1/120
	I1007 12:14:27.866565  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 2/120
	I1007 12:14:28.868170  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 3/120
	I1007 12:14:29.869805  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 4/120
	I1007 12:14:30.871580  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 5/120
	I1007 12:14:31.873161  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 6/120
	I1007 12:14:32.874807  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 7/120
	I1007 12:14:33.876323  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 8/120
	I1007 12:14:34.877942  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 9/120
	I1007 12:14:35.879689  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 10/120
	I1007 12:14:36.881286  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 11/120
	I1007 12:14:37.882674  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 12/120
	I1007 12:14:38.884176  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 13/120
	I1007 12:14:39.885626  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 14/120
	I1007 12:14:40.887080  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 15/120
	I1007 12:14:41.888673  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 16/120
	I1007 12:14:42.890196  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 17/120
	I1007 12:14:43.891826  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 18/120
	I1007 12:14:44.893472  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 19/120
	I1007 12:14:45.895968  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 20/120
	I1007 12:14:46.897950  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 21/120
	I1007 12:14:47.899598  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 22/120
	I1007 12:14:48.901036  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 23/120
	I1007 12:14:49.902610  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 24/120
	I1007 12:14:50.904546  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 25/120
	I1007 12:14:51.906390  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 26/120
	I1007 12:14:52.907968  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 27/120
	I1007 12:14:53.909601  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 28/120
	I1007 12:14:54.911113  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 29/120
	I1007 12:14:55.912398  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 30/120
	I1007 12:14:56.914072  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 31/120
	I1007 12:14:57.915708  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 32/120
	I1007 12:14:58.917351  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 33/120
	I1007 12:14:59.918933  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 34/120
	I1007 12:15:00.920405  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 35/120
	I1007 12:15:01.922077  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 36/120
	I1007 12:15:02.923775  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 37/120
	I1007 12:15:03.925318  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 38/120
	I1007 12:15:04.926885  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 39/120
	I1007 12:15:05.928252  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 40/120
	I1007 12:15:06.929711  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 41/120
	I1007 12:15:07.931273  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 42/120
	I1007 12:15:08.932685  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 43/120
	I1007 12:15:09.934056  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 44/120
	I1007 12:15:10.935749  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 45/120
	I1007 12:15:11.937331  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 46/120
	I1007 12:15:12.939086  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 47/120
	I1007 12:15:13.940867  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 48/120
	I1007 12:15:14.942587  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 49/120
	I1007 12:15:15.945005  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 50/120
	I1007 12:15:16.946513  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 51/120
	I1007 12:15:17.948256  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 52/120
	I1007 12:15:18.949667  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 53/120
	I1007 12:15:19.951277  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 54/120
	I1007 12:15:20.953413  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 55/120
	I1007 12:15:21.955174  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 56/120
	I1007 12:15:22.956867  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 57/120
	I1007 12:15:23.958547  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 58/120
	I1007 12:15:24.960100  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 59/120
	I1007 12:15:25.961927  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 60/120
	I1007 12:15:26.963528  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 61/120
	I1007 12:15:27.965420  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 62/120
	I1007 12:15:28.966997  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 63/120
	I1007 12:15:29.968647  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 64/120
	I1007 12:15:30.970528  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 65/120
	I1007 12:15:31.971997  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 66/120
	I1007 12:15:32.974242  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 67/120
	I1007 12:15:33.975860  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 68/120
	I1007 12:15:34.977717  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 69/120
	I1007 12:15:35.979709  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 70/120
	I1007 12:15:36.981207  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 71/120
	I1007 12:15:37.983074  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 72/120
	I1007 12:15:38.984582  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 73/120
	I1007 12:15:39.986218  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 74/120
	I1007 12:15:40.988385  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 75/120
	I1007 12:15:41.990069  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 76/120
	I1007 12:15:42.991629  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 77/120
	I1007 12:15:43.993098  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 78/120
	I1007 12:15:44.994468  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 79/120
	I1007 12:15:45.996251  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 80/120
	I1007 12:15:46.997778  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 81/120
	I1007 12:15:47.999273  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 82/120
	I1007 12:15:49.001867  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 83/120
	I1007 12:15:50.003397  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 84/120
	I1007 12:15:51.005395  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 85/120
	I1007 12:15:52.006922  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 86/120
	I1007 12:15:53.008483  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 87/120
	I1007 12:15:54.010185  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 88/120
	I1007 12:15:55.011774  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 89/120
	I1007 12:15:56.013627  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 90/120
	I1007 12:15:57.015273  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 91/120
	I1007 12:15:58.016659  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 92/120
	I1007 12:15:59.018110  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 93/120
	I1007 12:16:00.019457  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 94/120
	I1007 12:16:01.021282  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 95/120
	I1007 12:16:02.022625  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 96/120
	I1007 12:16:03.024591  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 97/120
	I1007 12:16:04.026486  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 98/120
	I1007 12:16:05.028162  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 99/120
	I1007 12:16:06.029604  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 100/120
	I1007 12:16:07.030874  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 101/120
	I1007 12:16:08.032556  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 102/120
	I1007 12:16:09.034034  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 103/120
	I1007 12:16:10.035415  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 104/120
	I1007 12:16:11.037351  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 105/120
	I1007 12:16:12.038785  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 106/120
	I1007 12:16:13.040297  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 107/120
	I1007 12:16:14.041671  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 108/120
	I1007 12:16:15.043305  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 109/120
	I1007 12:16:16.045021  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 110/120
	I1007 12:16:17.046351  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 111/120
	I1007 12:16:18.047675  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 112/120
	I1007 12:16:19.049007  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 113/120
	I1007 12:16:20.050606  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 114/120
	I1007 12:16:21.052666  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 115/120
	I1007 12:16:22.054188  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 116/120
	I1007 12:16:23.055512  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 117/120
	I1007 12:16:24.056952  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 118/120
	I1007 12:16:25.058700  406831 main.go:141] libmachine: (ha-628553) Waiting for machine to stop 119/120
	I1007 12:16:26.059747  406831 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 12:16:26.059830  406831 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1007 12:16:26.062032  406831 out.go:201] 
	W1007 12:16:26.063517  406831 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1007 12:16:26.063532  406831 out.go:270] * 
	W1007 12:16:26.066501  406831 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 12:16:26.067800  406831 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-628553 -v=7 --alsologtostderr" : exit status 82
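The stderr above shows the stop path polling the domain state roughly once per second for 120 attempts (about two minutes) and then exiting with GUEST_STOP_TIMEOUT because the VM never left the "Running" state. The Go sketch below is illustrative only; the function names, signatures, and the simulated always-running VM are assumptions for the example, not minikube's actual driver API.

	// Illustrative only: a minimal sketch of the stop-wait pattern visible in the
	// stderr above, where the driver polls the VM state once per second for up to
	// 120 attempts before giving up. Names and signatures are assumptions for the
	// example, not minikube's actual driver API.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls isStopped once per interval, up to maxAttempts times,
	// logging progress the way the log lines above do.
	func waitForStop(isStopped func() bool, maxAttempts int, interval time.Duration) error {
		for i := 0; i < maxAttempts; i++ {
			if isStopped() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulate a VM that never reaches the stopped state, mirroring the failure above.
		err := waitForStop(func() bool { return false }, 5, 10*time.Millisecond)
		fmt.Println("stop err:", err)
	}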
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-628553 --wait=true -v=7 --alsologtostderr
E1007 12:16:42.463162  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:17:10.164820  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:20:01.381445  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:21:42.463120  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-628553 --wait=true -v=7 --alsologtostderr: (7m53.474129012s)
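Most of the nearly eight minutes of this restart went to the first provisioning attempt, which kept failing to reach the old VM over SSH before minikube restarted the domain; the retry.go lines later in this log then show the driver waiting for the recreated domain to obtain an IP address, retrying with growing, jittered delays (279ms, 476ms, 906ms, ... up to about 4s). A rough, hedged sketch of that wait-with-backoff pattern follows; getIP, the delay schedule, and the 4s cap are illustrative assumptions, not minikube's implementation.

	// Illustrative only: a probe retried with growing, jittered delays until it
	// succeeds or a deadline passes. getIP, the delay schedule and the 4s cap are
	// assumptions for the example, not minikube's implementation.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP retries getIP with a roughly doubling, jittered delay until it
	// returns an address or the overall deadline is exceeded.
	func waitForIP(getIP func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := getIP(); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			if delay < 4*time.Second {
				delay *= 2
			}
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		// Pretend DHCP hands out a lease on the third poll.
		polls := 0
		ip, err := waitForIP(func() (string, error) {
			polls++
			if polls < 3 {
				return "", errors.New("unable to find current IP address")
			}
			return "192.168.39.110", nil
		}, 10*time.Second)
		fmt.Println(ip, err)
	}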
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-628553
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-628553 -n ha-628553
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-628553 logs -n 25: (1.876710325s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m04 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp testdata/cp-test.txt                                                | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553:/home/docker/cp-test_ha-628553-m04_ha-628553.txt                       |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553 sudo cat                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553.txt                                 |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03:/home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m03 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-628553 node stop m02 -v=7                                                     | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-628553 node start m02 -v=7                                                    | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-628553 -v=7                                                           | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-628553 -v=7                                                                | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-628553 --wait=true -v=7                                                    | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:16 UTC | 07 Oct 24 12:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-628553                                                                | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:16:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:16:26.123757  407433 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:16:26.123885  407433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:16:26.123894  407433 out.go:358] Setting ErrFile to fd 2...
	I1007 12:16:26.123899  407433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:16:26.124099  407433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:16:26.124687  407433 out.go:352] Setting JSON to false
	I1007 12:16:26.125704  407433 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7132,"bootTime":1728296254,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:16:26.125769  407433 start.go:139] virtualization: kvm guest
	I1007 12:16:26.128261  407433 out.go:177] * [ha-628553] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:16:26.129631  407433 notify.go:220] Checking for updates...
	I1007 12:16:26.129690  407433 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:16:26.131194  407433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:16:26.132881  407433 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:16:26.134204  407433 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:16:26.135537  407433 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:16:26.136781  407433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:16:26.138675  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:16:26.138806  407433 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:16:26.139340  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:16:26.139398  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:16:26.155992  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41965
	I1007 12:16:26.156513  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:16:26.157038  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:16:26.157059  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:16:26.157404  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:16:26.157605  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:16:26.193278  407433 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 12:16:26.194596  407433 start.go:297] selected driver: kvm2
	I1007 12:16:26.194609  407433 start.go:901] validating driver "kvm2" against &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false
efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:16:26.194734  407433 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:16:26.195065  407433 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:16:26.195142  407433 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:16:26.210263  407433 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:16:26.210923  407433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:16:26.210980  407433 cni.go:84] Creating CNI manager for ""
	I1007 12:16:26.211057  407433 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 12:16:26.211117  407433 start.go:340] cluster config:
	{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacce
l:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:16:26.211269  407433 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:16:26.213096  407433 out.go:177] * Starting "ha-628553" primary control-plane node in "ha-628553" cluster
	I1007 12:16:26.214271  407433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:16:26.214313  407433 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:16:26.214324  407433 cache.go:56] Caching tarball of preloaded images
	I1007 12:16:26.214415  407433 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:16:26.214425  407433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:16:26.214536  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:16:26.214713  407433 start.go:360] acquireMachinesLock for ha-628553: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:16:26.214754  407433 start.go:364] duration metric: took 22.976µs to acquireMachinesLock for "ha-628553"
	I1007 12:16:26.214769  407433 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:16:26.214776  407433 fix.go:54] fixHost starting: 
	I1007 12:16:26.215091  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:16:26.215129  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:16:26.229648  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41911
	I1007 12:16:26.230107  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:16:26.230606  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:16:26.230627  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:16:26.230939  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:16:26.231168  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:16:26.231307  407433 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:16:26.232790  407433 fix.go:112] recreateIfNeeded on ha-628553: state=Running err=<nil>
	W1007 12:16:26.232814  407433 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:16:26.235018  407433 out.go:177] * Updating the running kvm2 "ha-628553" VM ...
	I1007 12:16:26.236377  407433 machine.go:93] provisionDockerMachine start ...
	I1007 12:16:26.236397  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:16:26.236609  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:16:26.239043  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:16:26.239559  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:16:26.239609  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:16:26.239720  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:16:26.239947  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:16:26.240108  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:16:26.240247  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:16:26.240401  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:16:26.240603  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:16:26.240614  407433 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:16:44.599314  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:16:50.679247  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:16:53.751394  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:16:59.831258  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:02.903378  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:08.983267  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:12.055350  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:18.135276  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:21.207331  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:27.287338  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:33.367275  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:36.439285  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:42.519255  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:45.591262  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:51.671323  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:54.743312  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:00.823290  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:03.895331  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:09.975294  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:13.047432  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:19.127244  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:22.199315  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:28.279347  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:31.351313  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:37.431267  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:40.503281  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:46.583296  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:49.655299  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:55.735286  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:58.807398  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:04.887363  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:07.959313  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:14.039298  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:17.111274  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:23.191254  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:26.263229  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:32.343289  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:35.415281  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:41.495251  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:44.567273  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:50.647311  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:53.719310  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:59.799336  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:02.871340  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:08.951312  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:12.023252  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:18.103270  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:21.175326  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:27.255340  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:30.327259  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:36.407258  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:39.479362  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:45.559291  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:48.631374  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:54.711275  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:57.783299  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:21:03.863249  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:21:06.935316  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:21:13.015272  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:21:16.087303  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:21:19.090010  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:21:19.090081  407433 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:21:19.090436  407433 buildroot.go:166] provisioning hostname "ha-628553"
	I1007 12:21:19.090476  407433 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:21:19.090712  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:19.092889  407433 machine.go:96] duration metric: took 4m52.856494555s to provisionDockerMachine
	I1007 12:21:19.092936  407433 fix.go:56] duration metric: took 4m52.878159598s for fixHost
	I1007 12:21:19.092942  407433 start.go:83] releasing machines lock for "ha-628553", held for 4m52.878179978s
	W1007 12:21:19.092959  407433 start.go:714] error starting host: provision: host is not running
	W1007 12:21:19.093084  407433 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1007 12:21:19.093093  407433 start.go:729] Will try again in 5 seconds ...
	I1007 12:21:24.095416  407433 start.go:360] acquireMachinesLock for ha-628553: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:21:24.095566  407433 start.go:364] duration metric: took 81.063µs to acquireMachinesLock for "ha-628553"
	I1007 12:21:24.095604  407433 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:21:24.095613  407433 fix.go:54] fixHost starting: 
	I1007 12:21:24.095992  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:21:24.096023  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:21:24.112503  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I1007 12:21:24.113085  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:21:24.113729  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:21:24.113752  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:21:24.114103  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:21:24.114310  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:24.114471  407433 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:21:24.116362  407433 fix.go:112] recreateIfNeeded on ha-628553: state=Stopped err=<nil>
	I1007 12:21:24.116387  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	W1007 12:21:24.116572  407433 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:21:24.119518  407433 out.go:177] * Restarting existing kvm2 VM for "ha-628553" ...
	I1007 12:21:24.121193  407433 main.go:141] libmachine: (ha-628553) Calling .Start
	I1007 12:21:24.121531  407433 main.go:141] libmachine: (ha-628553) Ensuring networks are active...
	I1007 12:21:24.122685  407433 main.go:141] libmachine: (ha-628553) Ensuring network default is active
	I1007 12:21:24.123229  407433 main.go:141] libmachine: (ha-628553) Ensuring network mk-ha-628553 is active
	I1007 12:21:24.123712  407433 main.go:141] libmachine: (ha-628553) Getting domain xml...
	I1007 12:21:24.124530  407433 main.go:141] libmachine: (ha-628553) Creating domain...
	I1007 12:21:25.367026  407433 main.go:141] libmachine: (ha-628553) Waiting to get IP...
	I1007 12:21:25.368097  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:25.368533  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:25.368608  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:25.368510  408877 retry.go:31] will retry after 279.419429ms: waiting for machine to come up
	I1007 12:21:25.650333  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:25.650773  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:25.650798  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:25.650724  408877 retry.go:31] will retry after 283.251799ms: waiting for machine to come up
	I1007 12:21:25.935196  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:25.935605  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:25.935630  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:25.935555  408877 retry.go:31] will retry after 476.147073ms: waiting for machine to come up
	I1007 12:21:26.413173  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:26.413522  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:26.413551  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:26.413509  408877 retry.go:31] will retry after 398.750079ms: waiting for machine to come up
	I1007 12:21:26.814134  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:26.814547  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:26.814577  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:26.814483  408877 retry.go:31] will retry after 616.527868ms: waiting for machine to come up
	I1007 12:21:27.432565  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:27.433095  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:27.433129  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:27.433033  408877 retry.go:31] will retry after 906.153026ms: waiting for machine to come up
	I1007 12:21:28.341150  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:28.341606  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:28.341641  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:28.341511  408877 retry.go:31] will retry after 1.022594433s: waiting for machine to come up
	I1007 12:21:29.366330  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:29.366748  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:29.366770  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:29.366714  408877 retry.go:31] will retry after 1.132267271s: waiting for machine to come up
	I1007 12:21:30.501161  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:30.501554  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:30.501590  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:30.501492  408877 retry.go:31] will retry after 1.319777065s: waiting for machine to come up
	I1007 12:21:31.823354  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:31.823800  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:31.823827  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:31.823748  408877 retry.go:31] will retry after 1.461219032s: waiting for machine to come up
	I1007 12:21:33.287405  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:33.287878  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:33.287908  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:33.287824  408877 retry.go:31] will retry after 2.368607456s: waiting for machine to come up
	I1007 12:21:35.658851  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:35.659296  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:35.659324  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:35.659255  408877 retry.go:31] will retry after 2.655568538s: waiting for machine to come up
	I1007 12:21:38.318268  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:38.318804  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:38.318831  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:38.318692  408877 retry.go:31] will retry after 4.033786402s: waiting for machine to come up
	I1007 12:21:42.356645  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.357140  407433 main.go:141] libmachine: (ha-628553) Found IP for machine: 192.168.39.110
	I1007 12:21:42.357166  407433 main.go:141] libmachine: (ha-628553) Reserving static IP address...
	I1007 12:21:42.357184  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has current primary IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.357629  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "ha-628553", mac: "52:54:00:7b:12:fd", ip: "192.168.39.110"} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.357662  407433 main.go:141] libmachine: (ha-628553) DBG | skip adding static IP to network mk-ha-628553 - found existing host DHCP lease matching {name: "ha-628553", mac: "52:54:00:7b:12:fd", ip: "192.168.39.110"}
	I1007 12:21:42.357678  407433 main.go:141] libmachine: (ha-628553) Reserved static IP address: 192.168.39.110
	I1007 12:21:42.357724  407433 main.go:141] libmachine: (ha-628553) Waiting for SSH to be available...
	I1007 12:21:42.357742  407433 main.go:141] libmachine: (ha-628553) DBG | Getting to WaitForSSH function...
	I1007 12:21:42.359902  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.360251  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.360271  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.360448  407433 main.go:141] libmachine: (ha-628553) DBG | Using SSH client type: external
	I1007 12:21:42.360477  407433 main.go:141] libmachine: (ha-628553) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa (-rw-------)
	I1007 12:21:42.360512  407433 main.go:141] libmachine: (ha-628553) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:21:42.360527  407433 main.go:141] libmachine: (ha-628553) DBG | About to run SSH command:
	I1007 12:21:42.360537  407433 main.go:141] libmachine: (ha-628553) DBG | exit 0
	I1007 12:21:42.483116  407433 main.go:141] libmachine: (ha-628553) DBG | SSH cmd err, output: <nil>: 
	I1007 12:21:42.483536  407433 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:21:42.484252  407433 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:21:42.486980  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.487455  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.487480  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.487844  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:21:42.488065  407433 machine.go:93] provisionDockerMachine start ...
	I1007 12:21:42.488101  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:42.488336  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:42.490571  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.490951  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.490998  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.491066  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:42.491287  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.491435  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.491558  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:42.491740  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:21:42.491981  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:21:42.491995  407433 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:21:42.591574  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:21:42.591609  407433 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:21:42.591857  407433 buildroot.go:166] provisioning hostname "ha-628553"
	I1007 12:21:42.591888  407433 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:21:42.592065  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:42.595332  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.595848  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.595878  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.596115  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:42.596310  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.596459  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.596587  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:42.596779  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:21:42.596970  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:21:42.596985  407433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553 && echo "ha-628553" | sudo tee /etc/hostname
	I1007 12:21:42.715355  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553
	
	I1007 12:21:42.715386  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:42.718394  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.718755  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.718789  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.718953  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:42.719149  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.719306  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.719395  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:42.719539  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:21:42.719757  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:21:42.719774  407433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:21:42.829073  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:21:42.829128  407433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:21:42.829150  407433 buildroot.go:174] setting up certificates
	I1007 12:21:42.829164  407433 provision.go:84] configureAuth start
	I1007 12:21:42.829182  407433 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:21:42.829513  407433 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:21:42.832451  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.832765  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.832789  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.833001  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:42.835330  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.835639  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.835666  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.835776  407433 provision.go:143] copyHostCerts
	I1007 12:21:42.835829  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:21:42.835898  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:21:42.835919  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:21:42.835999  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:21:42.836099  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:21:42.836120  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:21:42.836128  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:21:42.836155  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:21:42.836210  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:21:42.836229  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:21:42.836235  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:21:42.836258  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:21:42.836323  407433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553 san=[127.0.0.1 192.168.39.110 ha-628553 localhost minikube]
	I1007 12:21:42.909733  407433 provision.go:177] copyRemoteCerts
	I1007 12:21:42.909804  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:21:42.909830  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:42.912711  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.913150  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.913179  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.913345  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:42.913555  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.913751  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:42.913894  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:21:42.993885  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:21:42.993979  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1007 12:21:43.019522  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:21:43.019599  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:21:43.045619  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:21:43.045708  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:21:43.071015  407433 provision.go:87] duration metric: took 241.830335ms to configureAuth
	I1007 12:21:43.071046  407433 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:21:43.071275  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:21:43.071355  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.074346  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.074687  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.074714  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.074864  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.075099  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.075285  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.075454  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.075642  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:21:43.075864  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:21:43.075882  407433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:21:43.302697  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:21:43.302739  407433 machine.go:96] duration metric: took 814.660374ms to provisionDockerMachine
	I1007 12:21:43.302758  407433 start.go:293] postStartSetup for "ha-628553" (driver="kvm2")
	I1007 12:21:43.302773  407433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:21:43.302794  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.303209  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:21:43.303247  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.305797  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.306200  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.306254  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.306414  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.306669  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.306846  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.307097  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:21:43.386822  407433 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:21:43.391301  407433 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:21:43.391356  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:21:43.391439  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:21:43.391522  407433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:21:43.391541  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:21:43.391629  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:21:43.401523  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:21:43.427243  407433 start.go:296] duration metric: took 124.440543ms for postStartSetup
	I1007 12:21:43.427311  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.427678  407433 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 12:21:43.427715  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.430704  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.431295  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.431322  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.431525  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.431734  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.431921  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.432070  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:21:43.514791  407433 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1007 12:21:43.514878  407433 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1007 12:21:43.555143  407433 fix.go:56] duration metric: took 19.459516578s for fixHost
	I1007 12:21:43.555213  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.558511  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.558893  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.558934  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.559133  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.559345  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.559558  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.559700  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.559877  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:21:43.560073  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:21:43.560086  407433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:21:43.664129  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728303703.629612558
	
	I1007 12:21:43.664163  407433 fix.go:216] guest clock: 1728303703.629612558
	I1007 12:21:43.664176  407433 fix.go:229] Guest: 2024-10-07 12:21:43.629612558 +0000 UTC Remote: 2024-10-07 12:21:43.5551888 +0000 UTC m=+317.472624770 (delta=74.423758ms)
	I1007 12:21:43.664203  407433 fix.go:200] guest clock delta is within tolerance: 74.423758ms
	I1007 12:21:43.664209  407433 start.go:83] releasing machines lock for "ha-628553", held for 19.56863138s
	I1007 12:21:43.664247  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.664531  407433 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:21:43.667342  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.667692  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.667713  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.667926  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.668513  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.668738  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.668823  407433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:21:43.668885  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.668992  407433 ssh_runner.go:195] Run: cat /version.json
	I1007 12:21:43.669019  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.671881  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.672069  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.672323  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.672347  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.672508  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.672540  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.672558  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.672775  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.672782  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.672993  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.673037  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.673154  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.673173  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:21:43.673313  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:21:43.773521  407433 ssh_runner.go:195] Run: systemctl --version
	I1007 12:21:43.779770  407433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:21:43.923129  407433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:21:43.929378  407433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:21:43.929478  407433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:21:43.947124  407433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:21:43.947158  407433 start.go:495] detecting cgroup driver to use...
	I1007 12:21:43.947250  407433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:21:43.968850  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:21:43.983128  407433 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:21:43.983187  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:21:43.998922  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:21:44.013767  407433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:21:44.131824  407433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:21:44.307748  407433 docker.go:233] disabling docker service ...
	I1007 12:21:44.307813  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:21:44.322761  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:21:44.336261  407433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:21:44.455668  407433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:21:44.573473  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:21:44.588200  407433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:21:44.609001  407433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:21:44.609108  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.620005  407433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:21:44.620097  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.631644  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.642816  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.654321  407433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:21:44.665685  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.676944  407433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.695174  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.706235  407433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:21:44.716588  407433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:21:44.716660  407433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:21:44.730452  407433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:21:44.740676  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:21:44.871591  407433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:21:44.976983  407433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:21:44.977064  407433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:21:44.982348  407433 start.go:563] Will wait 60s for crictl version
	I1007 12:21:44.982414  407433 ssh_runner.go:195] Run: which crictl
	I1007 12:21:44.986177  407433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:21:45.026688  407433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:21:45.026772  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:21:45.056385  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:21:45.089059  407433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:21:45.090356  407433 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:21:45.092940  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:45.093302  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:45.093327  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:45.093547  407433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:21:45.098195  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:21:45.112382  407433 kubeadm.go:883] updating cluster {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:21:45.112579  407433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:21:45.112630  407433 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:21:45.157388  407433 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:21:45.157470  407433 ssh_runner.go:195] Run: which lz4
	I1007 12:21:45.161737  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 12:21:45.161869  407433 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:21:45.166514  407433 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:21:45.166551  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:21:46.605371  407433 crio.go:462] duration metric: took 1.443545276s to copy over tarball
	I1007 12:21:46.605453  407433 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:21:48.644174  407433 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.038668789s)
	I1007 12:21:48.644223  407433 crio.go:469] duration metric: took 2.038822202s to extract the tarball
	I1007 12:21:48.644232  407433 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 12:21:48.681627  407433 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:21:48.729709  407433 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:21:48.729745  407433 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:21:48.729755  407433 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.31.1 crio true true} ...
	I1007 12:21:48.729876  407433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:21:48.729949  407433 ssh_runner.go:195] Run: crio config
	I1007 12:21:48.777864  407433 cni.go:84] Creating CNI manager for ""
	I1007 12:21:48.777889  407433 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 12:21:48.777900  407433 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:21:48.777927  407433 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-628553 NodeName:ha-628553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:21:48.778139  407433 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-628553"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:21:48.778167  407433 kube-vip.go:115] generating kube-vip config ...
	I1007 12:21:48.778226  407433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:21:48.794550  407433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:21:48.794658  407433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:21:48.794711  407433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:21:48.804548  407433 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:21:48.804616  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:21:48.814049  407433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:21:48.830950  407433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:21:48.847474  407433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:21:48.864374  407433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:21:48.881516  407433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:21:48.885417  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:21:48.897733  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:21:49.015861  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:21:49.033974  407433 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.110
	I1007 12:21:49.033999  407433 certs.go:194] generating shared ca certs ...
	I1007 12:21:49.034021  407433 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:21:49.034242  407433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:21:49.034299  407433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:21:49.034315  407433 certs.go:256] generating profile certs ...
	I1007 12:21:49.034456  407433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:21:49.034493  407433 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.eb58c645
	I1007 12:21:49.034513  407433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.eb58c645 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.149 192.168.39.254]
	I1007 12:21:49.325201  407433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.eb58c645 ...
	I1007 12:21:49.325236  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.eb58c645: {Name:mk52b692a291609d28023b2e669acc8c5036935a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:21:49.325440  407433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.eb58c645 ...
	I1007 12:21:49.325458  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.eb58c645: {Name:mk459cf1eb91311870c17fc9cbea0da8e2941bb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:21:49.325562  407433 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.eb58c645 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:21:49.325744  407433 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.eb58c645 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:21:49.325888  407433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:21:49.325906  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:21:49.325919  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:21:49.325930  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:21:49.325941  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:21:49.325954  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:21:49.325974  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:21:49.325985  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:21:49.325997  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:21:49.326051  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:21:49.326085  407433 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:21:49.326095  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:21:49.326116  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:21:49.326137  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:21:49.326159  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:21:49.326194  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:21:49.326227  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:21:49.326242  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:21:49.326254  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:21:49.326903  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:21:49.358627  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:21:49.384872  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:21:49.410247  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:21:49.449977  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 12:21:49.476006  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:21:49.502461  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:21:49.527930  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:21:49.554264  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:21:49.579161  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:21:49.604114  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:21:49.627763  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:21:49.646019  407433 ssh_runner.go:195] Run: openssl version
	I1007 12:21:49.652289  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:21:49.665149  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:21:49.670047  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:21:49.670122  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:21:49.676216  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:21:49.689578  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:21:49.702388  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:21:49.707134  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:21:49.707210  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:21:49.713294  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:21:49.726297  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:21:49.740242  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:21:49.745098  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:21:49.745158  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:21:49.751364  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
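The commands above copy each CA into /usr/share/ca-certificates and then wire it into OpenSSL's trust directory: "openssl x509 -hash -noout -in <cert>" prints the subject hash, and a "<hash>.0" symlink under /etc/ssl/certs makes the certificate discoverable by hash lookup (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A minimal Go sketch of that step, shelling out to openssl the same way the log does; the installCA helper and the hard-coded path are illustrative, not minikube's code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA reproduces only the hash-symlink step: compute the OpenSSL
    // subject hash of certPath and point /etc/ssl/certs/<hash>.0 at it.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem in this run
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mirror `ln -fs`: replace an existing link if present
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }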
	I1007 12:21:49.764372  407433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:21:49.769743  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:21:49.776032  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:21:49.782996  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:21:49.789785  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:21:49.796036  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:21:49.801890  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
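Each of the -checkend 86400 runs above asks openssl whether the certificate will still be valid 24 hours from now; a failure here would force the control-plane certs to be regenerated before the restart. The equivalent check in plain Go (a sketch assuming the file holds a single PEM certificate; not minikube's implementation):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // which is what `openssl x509 -checkend 86400` tests for d = 24h.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }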
	I1007 12:21:49.807904  407433 kubeadm.go:392] StartCluster: {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:21:49.808044  407433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:21:49.808092  407433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:21:49.848170  407433 cri.go:89] found id: ""
	I1007 12:21:49.848266  407433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:21:49.859183  407433 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 12:21:49.859210  407433 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 12:21:49.859307  407433 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 12:21:49.870460  407433 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 12:21:49.871075  407433 kubeconfig.go:125] found "ha-628553" server: "https://192.168.39.254:8443"
	I1007 12:21:49.871114  407433 kubeconfig.go:47] verify endpoint returned: got: 192.168.39.254:8443, want: 192.168.39.110:8443
	I1007 12:21:49.871485  407433 kubeconfig.go:62] /home/jenkins/minikube-integration/19763-377026/kubeconfig needs updating (will repair): [kubeconfig needs server address update]
	I1007 12:21:49.871770  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:21:49.872195  407433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:21:49.872502  407433 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.110:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:21:49.872973  407433 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:21:49.873226  407433 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 12:21:49.883822  407433 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I1007 12:21:49.883853  407433 kubeadm.go:597] duration metric: took 24.635347ms to restartPrimaryControlPlane
	I1007 12:21:49.883865  407433 kubeadm.go:394] duration metric: took 75.972126ms to StartCluster
	I1007 12:21:49.883888  407433 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:21:49.883981  407433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:21:49.884584  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
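The kubeconfig repair above is triggered because the file still points at the HA virtual IP (192.168.39.254) while this restart talks to the primary control plane directly (192.168.39.110). A hedged sketch of that repair using client-go's clientcmd package, assuming the cluster entry is keyed by the profile name; this is not minikube's own kubeconfig code:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    // repairServer rewrites the server URL of the named cluster entry in a kubeconfig file.
    func repairServer(kubeconfigPath, clusterName, server string) error {
        cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
        if err != nil {
            return err
        }
        cluster, ok := cfg.Clusters[clusterName]
        if !ok {
            return fmt.Errorf("cluster %q not found in %s", clusterName, kubeconfigPath)
        }
        cluster.Server = server
        return clientcmd.WriteToFile(*cfg, kubeconfigPath)
    }

    func main() {
        err := repairServer(
            "/home/jenkins/minikube-integration/19763-377026/kubeconfig",
            "ha-628553",
            "https://192.168.39.110:8443",
        )
        if err != nil {
            fmt.Println(err)
        }
    }

As the lock.go lines above show, the write is guarded by a file lock so concurrent minikube invocations do not clobber each other's kubeconfig updates.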
	I1007 12:21:49.884832  407433 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:21:49.884857  407433 start.go:241] waiting for startup goroutines ...
	I1007 12:21:49.884866  407433 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:21:49.885049  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:21:49.887198  407433 out.go:177] * Enabled addons: 
	I1007 12:21:49.888602  407433 addons.go:510] duration metric: took 3.73375ms for enable addons: enabled=[]
	I1007 12:21:49.888640  407433 start.go:246] waiting for cluster config update ...
	I1007 12:21:49.888652  407433 start.go:255] writing updated cluster config ...
	I1007 12:21:49.890380  407433 out.go:201] 
	I1007 12:21:49.892044  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:21:49.892193  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:21:49.893863  407433 out.go:177] * Starting "ha-628553-m02" control-plane node in "ha-628553" cluster
	I1007 12:21:49.895054  407433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:21:49.895079  407433 cache.go:56] Caching tarball of preloaded images
	I1007 12:21:49.895179  407433 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:21:49.895193  407433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:21:49.895302  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:21:49.895485  407433 start.go:360] acquireMachinesLock for ha-628553-m02: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:21:49.895560  407433 start.go:364] duration metric: took 38.924µs to acquireMachinesLock for "ha-628553-m02"
	I1007 12:21:49.895582  407433 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:21:49.895589  407433 fix.go:54] fixHost starting: m02
	I1007 12:21:49.895875  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:21:49.895903  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:21:49.911264  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I1007 12:21:49.911760  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:21:49.912290  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:21:49.912312  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:21:49.912642  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:21:49.912822  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:21:49.912974  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetState
	I1007 12:21:49.914562  407433 fix.go:112] recreateIfNeeded on ha-628553-m02: state=Stopped err=<nil>
	I1007 12:21:49.914585  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	W1007 12:21:49.914751  407433 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:21:49.916531  407433 out.go:177] * Restarting existing kvm2 VM for "ha-628553-m02" ...
	I1007 12:21:49.917652  407433 main.go:141] libmachine: (ha-628553-m02) Calling .Start
	I1007 12:21:49.917805  407433 main.go:141] libmachine: (ha-628553-m02) Ensuring networks are active...
	I1007 12:21:49.918580  407433 main.go:141] libmachine: (ha-628553-m02) Ensuring network default is active
	I1007 12:21:49.918949  407433 main.go:141] libmachine: (ha-628553-m02) Ensuring network mk-ha-628553 is active
	I1007 12:21:49.919375  407433 main.go:141] libmachine: (ha-628553-m02) Getting domain xml...
	I1007 12:21:49.920033  407433 main.go:141] libmachine: (ha-628553-m02) Creating domain...
	I1007 12:21:51.235971  407433 main.go:141] libmachine: (ha-628553-m02) Waiting to get IP...
	I1007 12:21:51.237030  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:51.237526  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:51.237632  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:51.237515  409044 retry.go:31] will retry after 208.460483ms: waiting for machine to come up
	I1007 12:21:51.448245  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:51.448690  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:51.448739  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:51.448638  409044 retry.go:31] will retry after 314.033838ms: waiting for machine to come up
	I1007 12:21:51.764102  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:51.764559  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:51.764592  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:51.764510  409044 retry.go:31] will retry after 314.49319ms: waiting for machine to come up
	I1007 12:21:52.081111  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:52.081669  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:52.081702  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:52.081620  409044 retry.go:31] will retry after 607.201266ms: waiting for machine to come up
	I1007 12:21:52.690434  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:52.690884  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:52.690914  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:52.690843  409044 retry.go:31] will retry after 566.633148ms: waiting for machine to come up
	I1007 12:21:53.258616  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:53.259044  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:53.259067  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:53.259013  409044 retry.go:31] will retry after 586.73854ms: waiting for machine to come up
	I1007 12:21:53.847808  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:53.848191  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:53.848219  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:53.848137  409044 retry.go:31] will retry after 735.539748ms: waiting for machine to come up
	I1007 12:21:54.585005  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:54.585437  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:54.585466  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:54.585387  409044 retry.go:31] will retry after 1.240571246s: waiting for machine to come up
	I1007 12:21:55.827051  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:55.827539  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:55.827568  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:55.827489  409044 retry.go:31] will retry after 1.305114745s: waiting for machine to come up
	I1007 12:21:57.133879  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:57.134360  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:57.134385  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:57.134302  409044 retry.go:31] will retry after 1.972744404s: waiting for machine to come up
	I1007 12:21:59.109841  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:59.110349  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:59.110386  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:59.110283  409044 retry.go:31] will retry after 2.038392713s: waiting for machine to come up
	I1007 12:22:01.151126  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:01.151707  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:22:01.151742  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:22:01.151646  409044 retry.go:31] will retry after 2.812494777s: waiting for machine to come up
	I1007 12:22:03.967985  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:03.968480  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:22:03.968513  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:22:03.968409  409044 retry.go:31] will retry after 4.415302249s: waiting for machine to come up
	I1007 12:22:08.387856  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.388271  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has current primary IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.388292  407433 main.go:141] libmachine: (ha-628553-m02) Found IP for machine: 192.168.39.169
	I1007 12:22:08.388304  407433 main.go:141] libmachine: (ha-628553-m02) Reserving static IP address...
	I1007 12:22:08.388775  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "ha-628553-m02", mac: "52:54:00:59:4a:2e", ip: "192.168.39.169"} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.388803  407433 main.go:141] libmachine: (ha-628553-m02) Reserved static IP address: 192.168.39.169
	I1007 12:22:08.388822  407433 main.go:141] libmachine: (ha-628553-m02) DBG | skip adding static IP to network mk-ha-628553 - found existing host DHCP lease matching {name: "ha-628553-m02", mac: "52:54:00:59:4a:2e", ip: "192.168.39.169"}
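The string of "will retry after ..." lines above is a polling loop: the driver asks libvirt for the domain's DHCP lease and, until one shows up, sleeps for an increasing interval (here roughly 0.2s growing to 4.4s). A simplified sketch of the pattern; lookupIP is a hypothetical stand-in for the lease query, and the backoff is plain doubling rather than minikube's jittered schedule:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoIP = errors.New("no IP yet")

    // lookupIP stands in for asking libvirt's DHCP leases for the domain's address.
    func lookupIP(domain string) (string, error) { return "", errNoIP }

    // waitForIP polls until the VM has an address, roughly doubling the delay
    // between attempts, and gives up at the deadline.
    func waitForIP(domain string, deadline time.Duration) (string, error) {
        delay := 200 * time.Millisecond
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay *= 2
            if delay > 5*time.Second {
                delay = 5 * time.Second
            }
        }
        return "", fmt.Errorf("%s: timed out waiting for IP", domain)
    }

    func main() {
        if _, err := waitForIP("ha-628553-m02", 3*time.Second); err != nil {
            fmt.Println(err)
        }
    }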
	I1007 12:22:08.388840  407433 main.go:141] libmachine: (ha-628553-m02) DBG | Getting to WaitForSSH function...
	I1007 12:22:08.388851  407433 main.go:141] libmachine: (ha-628553-m02) Waiting for SSH to be available...
	I1007 12:22:08.391251  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.391741  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.391772  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.391911  407433 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH client type: external
	I1007 12:22:08.391956  407433 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa (-rw-------)
	I1007 12:22:08.391990  407433 main.go:141] libmachine: (ha-628553-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:22:08.392006  407433 main.go:141] libmachine: (ha-628553-m02) DBG | About to run SSH command:
	I1007 12:22:08.392017  407433 main.go:141] libmachine: (ha-628553-m02) DBG | exit 0
	I1007 12:22:08.519218  407433 main.go:141] libmachine: (ha-628553-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 12:22:08.519627  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:22:08.520267  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:22:08.523654  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.524166  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.524196  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.524529  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:22:08.524763  407433 machine.go:93] provisionDockerMachine start ...
	I1007 12:22:08.524782  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:08.525039  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:08.527442  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.527883  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.527913  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.528056  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:08.528266  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.528420  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.528566  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:08.528726  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:22:08.528904  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:22:08.528914  407433 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:22:08.635508  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:22:08.635537  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:22:08.635764  407433 buildroot.go:166] provisioning hostname "ha-628553-m02"
	I1007 12:22:08.635794  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:22:08.635985  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:08.638435  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.638821  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.638858  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.639006  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:08.639220  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.639380  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.639600  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:08.639843  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:22:08.640069  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:22:08.640085  407433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m02 && echo "ha-628553-m02" | sudo tee /etc/hostname
	I1007 12:22:08.760610  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m02
	
	I1007 12:22:08.760648  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:08.763799  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.764196  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.764235  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.764430  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:08.764649  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.764831  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.764927  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:08.765087  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:22:08.765280  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:22:08.765295  407433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:22:08.881564  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:22:08.881622  407433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:22:08.881648  407433 buildroot.go:174] setting up certificates
	I1007 12:22:08.881664  407433 provision.go:84] configureAuth start
	I1007 12:22:08.881683  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:22:08.882018  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:22:08.884802  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.885191  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.885210  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.885458  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:08.887773  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.888162  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.888194  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.888325  407433 provision.go:143] copyHostCerts
	I1007 12:22:08.888363  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:22:08.888416  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:22:08.888425  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:22:08.888483  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:22:08.888569  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:22:08.888587  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:22:08.888597  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:22:08.888619  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:22:08.888671  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:22:08.888688  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:22:08.888694  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:22:08.888710  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:22:08.888771  407433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m02 san=[127.0.0.1 192.168.39.169 ha-628553-m02 localhost minikube]
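The server certificate generated above is signed by the machine CA and carries exactly the SANs the log lists (127.0.0.1, 192.168.39.169, ha-628553-m02, localhost, minikube). A self-contained Go sketch of issuing such a certificate; the CA key pair is created on the fly only so the example runs on its own, whereas the real flow loads ca.pem and ca-key.pem:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            log.Fatal(err)
        }
        return v
    }

    func main() {
        // Throwaway CA so the sketch is runnable; the real flow loads ca.pem/ca-key.pem.
        caKey := must(ecdsa.GenerateKey(elliptic.P256(), rand.Reader))
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

        srvKey := must(ecdsa.GenerateKey(elliptic.P256(), rand.Reader))
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-628553-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision log line above.
            DNSNames:    []string{"ha-628553-m02", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.169")},
        }
        der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }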
	I1007 12:22:08.990424  407433 provision.go:177] copyRemoteCerts
	I1007 12:22:08.990490  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:22:08.990518  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:08.993619  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.994005  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.994040  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.994292  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:08.994527  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.994745  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:08.994894  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:22:09.077614  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:22:09.077727  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:22:09.103137  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:22:09.103230  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:22:09.129200  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:22:09.129295  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:22:09.154191  407433 provision.go:87] duration metric: took 272.509247ms to configureAuth
	I1007 12:22:09.154229  407433 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:22:09.154471  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:22:09.154586  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:09.157664  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.158116  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.158150  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.158338  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.158597  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.158797  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.158995  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.159186  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:22:09.159390  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:22:09.159411  407433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:22:09.381237  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:22:09.381276  407433 machine.go:96] duration metric: took 856.499638ms to provisionDockerMachine
	I1007 12:22:09.381296  407433 start.go:293] postStartSetup for "ha-628553-m02" (driver="kvm2")
	I1007 12:22:09.381312  407433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:22:09.381350  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.381697  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:22:09.381736  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:09.384327  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.384689  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.384719  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.384871  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.385068  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.385208  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.385332  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:22:09.469861  407433 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:22:09.474347  407433 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:22:09.474371  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:22:09.474447  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:22:09.474534  407433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:22:09.474548  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:22:09.474660  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:22:09.484037  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:22:09.510005  407433 start.go:296] duration metric: took 128.687734ms for postStartSetup
	I1007 12:22:09.510070  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.510493  407433 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 12:22:09.510523  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:09.513232  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.513602  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.513626  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.513760  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.513960  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.514147  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.514331  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:22:09.597780  407433 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1007 12:22:09.597864  407433 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1007 12:22:09.658410  407433 fix.go:56] duration metric: took 19.762812976s for fixHost
	I1007 12:22:09.658470  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:09.661500  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.661951  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.661983  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.662211  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.662450  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.662639  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.662813  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.662999  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:22:09.663221  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:22:09.663232  407433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:22:09.776043  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728303729.747670422
	
	I1007 12:22:09.776070  407433 fix.go:216] guest clock: 1728303729.747670422
	I1007 12:22:09.776080  407433 fix.go:229] Guest: 2024-10-07 12:22:09.747670422 +0000 UTC Remote: 2024-10-07 12:22:09.658444939 +0000 UTC m=+343.575880939 (delta=89.225483ms)
	I1007 12:22:09.776103  407433 fix.go:200] guest clock delta is within tolerance: 89.225483ms
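The clock check above runs "date +%s.%N" on the guest and compares the result with a host-side timestamp taken around the same moment; the ~89ms delta is inside tolerance, so the guest clock is not adjusted. A small sketch of that comparison, reusing the two timestamps from the log; the 1s tolerance is illustrative, not minikube's actual threshold:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far the
    // guest clock is from the given host-side time.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        host := time.Date(2024, 10, 7, 12, 22, 9, 658444939, time.UTC) // host-side "Remote" time from the log
        delta, err := clockDelta("1728303729.747670422", host)
        if err != nil {
            fmt.Println(err)
            return
        }
        tolerance := 1 * time.Second // illustrative threshold
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance,
            math.Abs(delta.Seconds()) <= tolerance.Seconds())
    }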
	I1007 12:22:09.776111  407433 start.go:83] releasing machines lock for "ha-628553-m02", held for 19.880537818s
	I1007 12:22:09.776138  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.776434  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:22:09.779169  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.779579  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.779606  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.782206  407433 out.go:177] * Found network options:
	I1007 12:22:09.783789  407433 out.go:177]   - NO_PROXY=192.168.39.110
	W1007 12:22:09.785051  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:22:09.785086  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.785678  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.785903  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.786013  407433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:22:09.786055  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	W1007 12:22:09.786140  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:22:09.786221  407433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:22:09.786242  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:09.788838  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.788959  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.789279  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.789313  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.789336  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.789375  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.789481  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.789646  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.789745  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.789827  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.789895  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.789957  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:22:09.790016  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.790109  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:22:10.010031  407433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:22:10.017030  407433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:22:10.017123  407433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:22:10.033689  407433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:22:10.033718  407433 start.go:495] detecting cgroup driver to use...
	I1007 12:22:10.033781  407433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:22:10.054449  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:22:10.069446  407433 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:22:10.069527  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:22:10.083996  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:22:10.098610  407433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:22:10.219232  407433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:22:10.373843  407433 docker.go:233] disabling docker service ...
	I1007 12:22:10.373933  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:22:10.388851  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:22:10.403086  407433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:22:10.540209  407433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:22:10.675669  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:22:10.690384  407433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:22:10.709546  407433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:22:10.709623  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.720116  407433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:22:10.720190  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.730739  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.741524  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.752457  407433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:22:10.764013  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.775511  407433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.794371  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
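Taken together, the sed edits above leave a CRI-O drop-in whose relevant settings look roughly like this (reconstructed from the commands in the log; unrelated lines and section headers omitted):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]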
	I1007 12:22:10.805510  407433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:22:10.815537  407433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:22:10.815603  407433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:22:10.831057  407433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:22:10.841520  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:22:10.968877  407433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:22:11.075263  407433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:22:11.075357  407433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:22:11.081175  407433 start.go:563] Will wait 60s for crictl version
	I1007 12:22:11.081242  407433 ssh_runner.go:195] Run: which crictl
	I1007 12:22:11.085171  407433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:22:11.133160  407433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:22:11.133271  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:22:11.164197  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:22:11.194713  407433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:22:11.196334  407433 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:22:11.197764  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:22:11.200441  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:11.200851  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:11.200877  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:11.201089  407433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:22:11.205514  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:22:11.218658  407433 mustload.go:65] Loading cluster: ha-628553
	I1007 12:22:11.218947  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:22:11.219414  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:22:11.219475  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:22:11.235738  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45321
	I1007 12:22:11.236267  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:22:11.236782  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:22:11.236806  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:22:11.237191  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:22:11.237368  407433 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:22:11.238911  407433 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:22:11.239272  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:22:11.239313  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:22:11.255263  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43133
	I1007 12:22:11.255795  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:22:11.256322  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:22:11.256339  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:22:11.256727  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:22:11.256985  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:22:11.257164  407433 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.169
	I1007 12:22:11.257178  407433 certs.go:194] generating shared ca certs ...
	I1007 12:22:11.257195  407433 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:22:11.257355  407433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:22:11.257399  407433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:22:11.257413  407433 certs.go:256] generating profile certs ...
	I1007 12:22:11.257495  407433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:22:11.257524  407433 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.043910f8
	I1007 12:22:11.257542  407433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.043910f8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.149 192.168.39.254]
	I1007 12:22:11.376262  407433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.043910f8 ...
	I1007 12:22:11.376304  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.043910f8: {Name:mkad116b0a0bd32720c3eed0fa14324438815f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:22:11.376541  407433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.043910f8 ...
	I1007 12:22:11.376562  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.043910f8: {Name:mk2a2a5bce258a22a7eaf81de7b6217966a2d787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:22:11.376684  407433 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.043910f8 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:22:11.376854  407433 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.043910f8 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:22:11.377018  407433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:22:11.377044  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:22:11.377059  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:22:11.377076  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:22:11.377092  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:22:11.377110  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:22:11.377125  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:22:11.377140  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:22:11.377155  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:22:11.377229  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:22:11.377263  407433 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:22:11.377275  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:22:11.377300  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:22:11.377328  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:22:11.377352  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:22:11.377397  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:22:11.377426  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:22:11.377443  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:22:11.377461  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:22:11.377501  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:22:11.381059  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:22:11.381601  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:22:11.381633  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:22:11.381836  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:22:11.382043  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:22:11.382222  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:22:11.382390  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:22:11.455503  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:22:11.461136  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:22:11.473653  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:22:11.478135  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:22:11.489946  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:22:11.494843  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:22:11.506268  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:22:11.510909  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:22:11.522605  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:22:11.527274  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:22:11.538497  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:22:11.543624  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:22:11.555363  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:22:11.583578  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:22:11.611552  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:22:11.636924  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:22:11.661790  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:22:11.685935  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:22:11.711710  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:22:11.737160  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:22:11.762304  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:22:11.786539  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:22:11.810685  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:22:11.833875  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:22:11.851451  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:22:11.868954  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:22:11.886423  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:22:11.905393  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:22:11.923875  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:22:11.941676  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:22:11.959844  407433 ssh_runner.go:195] Run: openssl version
	I1007 12:22:11.966344  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:22:11.978502  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:22:11.983776  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:22:11.983853  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:22:11.990297  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:22:12.002305  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:22:12.014110  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:22:12.019086  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:22:12.019159  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:22:12.025335  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:22:12.036758  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:22:12.048355  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:22:12.053412  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:22:12.053492  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:22:12.059272  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
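The hash/symlink sequence above is how the extra CA certificates are made visible to OpenSSL on the node: "openssl x509 -hash -noout" prints the subject hash that OpenSSL expects as the link name under /etc/ssl/certs (b5213941.0 for minikubeCA.pem in this run). A minimal sketch of the same idea, assuming the certificate is already in place:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"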
	I1007 12:22:12.070758  407433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:22:12.075791  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:22:12.082720  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:22:12.089301  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:22:12.096107  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:22:12.102517  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:22:12.109098  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
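The openssl calls above pass -checkend 86400, which exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 hours); that is how this run verifies the existing control-plane certs are not about to expire. A hypothetical manual equivalent:

  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "still valid for 24h" \
    || echo "expires within 24h"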
	I1007 12:22:12.115468  407433 kubeadm.go:934] updating node {m02 192.168.39.169 8443 v1.31.1 crio true true} ...
	I1007 12:22:12.115576  407433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
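The kubelet unit above overrides ExecStart so the kubelet on the m02 node registers with its own name and IP (--hostname-override=ha-628553-m02, --node-ip=192.168.39.169); the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further below. A hypothetical way to confirm the override on the node, assuming SSH as the "docker" user used elsewhere in this run:

  ssh docker@192.168.39.169 'systemctl cat kubelet | grep -e hostname-override -e node-ip'
  # expected: ... --hostname-override=ha-628553-m02 ... --node-ip=192.168.39.169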
	I1007 12:22:12.115603  407433 kube-vip.go:115] generating kube-vip config ...
	I1007 12:22:12.115644  407433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:22:12.132912  407433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:22:12.132991  407433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
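kube-vip is deployed as a static pod (the manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml below); with cp_enable and lb_enable set it holds the control-plane VIP 192.168.39.254 on eth0, load-balances port 8443 across the control-plane nodes, and uses the plndr-cp-lock lease for leader election. Hypothetical checks once the node is up (not part of the recorded run, assuming the profile's kubeconfig context is named ha-628553):

  kubectl --context ha-628553 -n kube-system get lease plndr-cp-lock
  ip -4 addr show dev eth0 | grep 192.168.39.254   # on whichever node currently holds the VIP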
	I1007 12:22:12.133043  407433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:22:12.145165  407433 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:22:12.145248  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:22:12.156404  407433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:22:12.175104  407433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:22:12.193469  407433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:22:12.212159  407433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:22:12.216422  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
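The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the VIP mapping, so name resolution inside the guest points at the HA endpoint. Illustrative effect on the node:

  grep control-plane.minikube.internal /etc/hosts
  # 192.168.39.254	control-plane.minikube.internal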
	I1007 12:22:12.231433  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:22:12.363053  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:22:12.381011  407433 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:22:12.381343  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:22:12.383401  407433 out.go:177] * Verifying Kubernetes components...
	I1007 12:22:12.384773  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:22:12.532916  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:22:12.553180  407433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:22:12.553550  407433 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:22:12.553656  407433 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:22:12.553972  407433 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m02" to be "Ready" ...
	I1007 12:22:12.554196  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:12.554220  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:12.554232  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:12.554239  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:18.301031  407433 round_trippers.go:574] Response Status:  in 5746 milliseconds
	I1007 12:22:19.302022  407433 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:19.302092  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:19.302103  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:19.302116  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:19.302126  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:25.882940  407433 round_trippers.go:574] Response Status: 200 OK in 6580 milliseconds
	I1007 12:22:25.884692  407433 node_ready.go:53] node "ha-628553-m02" has status "Ready":"Unknown"
	I1007 12:22:25.884831  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:25.884847  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:25.884860  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:25.884935  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:25.982595  407433 round_trippers.go:574] Response Status: 200 OK in 97 milliseconds
	I1007 12:22:26.054899  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:26.054921  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:26.054930  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:26.054933  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:26.059560  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:26.555014  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:26.555045  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:26.555057  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:26.555065  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:26.559206  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:27.054792  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:27.054816  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:27.054824  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:27.054827  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:27.061739  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:22:27.555292  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:27.555325  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:27.555337  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:27.555347  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:27.563215  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:28.055264  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:28.055291  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.055301  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.055304  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.059542  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:28.060267  407433 node_ready.go:49] node "ha-628553-m02" has status "Ready":"True"
	I1007 12:22:28.060290  407433 node_ready.go:38] duration metric: took 15.506272507s for node "ha-628553-m02" to be "Ready" ...
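The raw GETs above poll the node object's Ready condition through the API server at 192.168.39.110:8443 until it flips from Unknown to True. A hypothetical kubectl equivalent of the same check (again assuming the profile's context name):

  kubectl --context ha-628553 get node ha-628553-m02 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  # prints True once the kubelet on m02 reports Ready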
	I1007 12:22:28.060301  407433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:22:28.060373  407433 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:22:28.060386  407433 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:22:28.060447  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:22:28.060454  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.060462  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.060469  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.077200  407433 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1007 12:22:28.090257  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.090388  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:22:28.090399  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.090410  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.090415  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.123364  407433 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I1007 12:22:28.124275  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:28.124299  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.124310  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.124318  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.132653  407433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:22:28.133675  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:28.133696  407433 pod_ready.go:82] duration metric: took 43.399563ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.133706  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.133777  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:22:28.133784  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.133792  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.133796  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.139346  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:28.140092  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:28.140116  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.140128  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.140134  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.149768  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:22:28.150490  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:28.150518  407433 pod_ready.go:82] duration metric: took 16.804436ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.150534  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.150635  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:22:28.150648  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.150659  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.150665  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.167054  407433 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1007 12:22:28.168537  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:28.168564  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.168576  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.168597  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.196276  407433 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1007 12:22:28.196944  407433 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:28.196972  407433 pod_ready.go:82] duration metric: took 46.431838ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.196983  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.197072  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:28.197086  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.197095  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.197098  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.206052  407433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:22:28.206699  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:28.206720  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.206730  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.206735  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.214511  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:28.697370  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:28.697406  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.697424  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.697428  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.842951  407433 round_trippers.go:574] Response Status: 200 OK in 145 milliseconds
	I1007 12:22:28.845278  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:28.845303  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.845315  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.845322  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.854353  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:22:29.198165  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:29.198189  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:29.198198  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:29.198201  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:29.203175  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:29.205182  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:29.205209  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:29.205218  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:29.205223  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:29.210652  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:29.697669  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:29.697700  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:29.697713  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:29.697732  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:29.705615  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:29.706469  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:29.706492  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:29.706504  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:29.706511  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:29.715103  407433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:22:30.197960  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:30.197991  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:30.198004  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:30.198010  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:30.203140  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:30.204062  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:30.204080  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:30.204089  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:30.204096  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:30.222521  407433 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1007 12:22:30.223086  407433 pod_ready.go:103] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:30.697299  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:30.697334  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:30.697344  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:30.697347  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:30.701257  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:30.702061  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:30.702082  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:30.702094  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:30.702102  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:30.706938  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:31.198212  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:31.198243  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:31.198267  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:31.198274  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:31.203993  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:31.205091  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:31.205109  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:31.205118  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:31.205122  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:31.209691  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:31.697401  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:31.697433  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:31.697444  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:31.697451  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:31.701199  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:31.702210  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:31.702233  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:31.702246  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:31.702251  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:31.705418  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:32.198058  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:32.198116  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:32.198130  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:32.198136  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:32.203674  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:32.204616  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:32.204641  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:32.204653  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:32.204657  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:32.208476  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:32.697495  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:32.697522  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:32.697532  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:32.697538  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:32.702340  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:32.703196  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:32.703221  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:32.703235  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:32.703241  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:32.705999  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:32.706524  407433 pod_ready.go:103] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:33.198037  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:33.198069  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:33.198080  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:33.198084  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:33.203780  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:33.204636  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:33.204657  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:33.204669  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:33.204675  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:33.208214  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:33.697605  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:33.697632  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:33.697645  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:33.697650  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:33.701142  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:33.702088  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:33.702118  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:33.702131  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:33.702138  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:33.705122  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:34.197580  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:34.197604  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:34.197613  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:34.197619  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:34.201483  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:34.202164  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:34.202184  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:34.202195  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:34.202199  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:34.206539  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:34.697531  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:34.697559  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:34.697572  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:34.697580  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:34.702160  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:34.702943  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:34.702974  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:34.702987  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:34.702996  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:34.707938  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:34.708417  407433 pod_ready.go:103] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:35.197280  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:35.197306  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:35.197317  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:35.197322  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:35.200415  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:35.201519  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:35.201541  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:35.201553  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:35.201558  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:35.206937  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:35.697771  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:35.697814  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:35.697822  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:35.697826  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:35.700986  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:35.701912  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:35.701931  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:35.701942  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:35.701948  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:35.705346  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:36.197982  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:36.198006  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.198016  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.198020  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.201971  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:36.202642  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:36.202662  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.202672  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.202678  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.209784  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:36.697343  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:36.697370  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.697382  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.697389  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.707635  407433 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:22:36.708226  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:36.708244  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.708252  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.708256  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.717789  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:22:36.718189  407433 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:36.718208  407433 pod_ready.go:82] duration metric: took 8.521218499s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.718219  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.718296  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:22:36.718304  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.718312  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.718317  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.729522  407433 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:22:36.730246  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:36.730268  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.730279  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.730285  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.739861  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:22:36.740433  407433 pod_ready.go:93] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:36.740455  407433 pod_ready.go:82] duration metric: took 22.228703ms for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.740488  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.740589  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:22:36.740601  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.740612  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.740618  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.745235  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:36.746010  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:36.746027  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.746038  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.746044  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.755362  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:22:36.755982  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:36.756004  407433 pod_ready.go:82] duration metric: took 15.502576ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.756018  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.756088  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:36.756098  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.756109  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.756119  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.762638  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:22:36.763304  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:36.763320  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.763332  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.763338  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.769260  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:37.257047  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:37.257073  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:37.257082  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:37.257088  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:37.261258  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:37.262344  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:37.262362  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:37.262374  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:37.262381  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:37.265528  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:37.756316  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:37.756348  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:37.756357  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:37.756380  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:37.759959  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:37.760592  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:37.760608  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:37.760619  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:37.760626  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:37.763881  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:38.256736  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:38.256764  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:38.256772  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:38.256776  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:38.260822  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:38.261748  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:38.261773  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:38.261784  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:38.261790  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:38.264792  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:38.756573  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:38.756600  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:38.756608  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:38.756613  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:38.760580  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:38.761226  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:38.761243  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:38.761253  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:38.761258  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:38.764495  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:38.765069  407433 pod_ready.go:103] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:39.257548  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:39.257582  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:39.257596  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:39.257604  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:39.262371  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:39.263141  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:39.263157  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:39.263165  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:39.263168  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:39.265558  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:39.756414  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:39.756444  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:39.756453  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:39.756456  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:39.760294  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:39.761197  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:39.761224  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:39.761237  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:39.761244  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:39.764637  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:40.256227  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:40.256262  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:40.256270  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:40.256275  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:40.259556  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:40.260322  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:40.260342  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:40.260351  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:40.260355  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:40.263371  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:40.756372  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:40.756396  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:40.756403  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:40.756408  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:40.771335  407433 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1007 12:22:40.772006  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:40.772022  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:40.772030  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:40.772033  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:40.777535  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:40.777965  407433 pod_ready.go:103] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:41.256735  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:41.256767  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:41.256780  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:41.256788  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:41.273883  407433 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1007 12:22:41.280920  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:41.280948  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:41.280960  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:41.280967  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:41.292828  407433 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:22:41.757198  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:41.757222  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:41.757231  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:41.757236  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:41.770337  407433 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1007 12:22:41.771472  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:41.771494  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:41.771506  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:41.771514  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:41.779489  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:42.257216  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:42.257243  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.257252  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.257262  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.261915  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:42.262739  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:42.262763  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.262774  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.262781  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.266222  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:42.757106  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:42.757127  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.757137  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.757142  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.762377  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:42.763101  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:42.763119  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.763131  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.763136  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.770630  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:42.771096  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:42.771122  407433 pod_ready.go:82] duration metric: took 6.015095455s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.771133  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.771216  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:22:42.771226  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.771237  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.771244  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.778181  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:22:42.779565  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:42.779581  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.779591  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.779603  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.782316  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.782811  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:42.782829  407433 pod_ready.go:82] duration metric: took 11.687925ms for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.782843  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.782911  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:22:42.782920  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.782930  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.782937  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.785570  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.786923  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:42.786941  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.786952  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.786975  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.789441  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.789946  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:42.789965  407433 pod_ready.go:82] duration metric: took 7.11467ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.789979  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.790058  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:22:42.790069  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.790079  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.790088  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.792899  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.793562  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:42.793576  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.793584  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.793588  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.796927  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:42.797476  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:42.797498  407433 pod_ready.go:82] duration metric: took 7.503676ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.797511  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.797567  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:22:42.797574  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.797581  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.797587  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.800148  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.800742  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:42.800754  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.800762  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.800765  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.803727  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.804448  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:42.804467  407433 pod_ready.go:82] duration metric: took 6.948065ms for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.804481  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.957963  407433 request.go:632] Waited for 153.368794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:22:42.958055  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:22:42.958060  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.958069  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.958078  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.961567  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.157800  407433 request.go:632] Waited for 195.378331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:43.157878  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:43.157892  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:43.157903  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:43.157914  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:43.161601  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.162219  407433 pod_ready.go:93] pod "kube-proxy-956k4" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:43.162242  407433 pod_ready.go:82] duration metric: took 357.752321ms for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:43.162255  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:43.357413  407433 request.go:632] Waited for 195.066421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:22:43.357483  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:22:43.357488  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:43.357495  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:43.357499  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:43.361323  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.557269  407433 request.go:632] Waited for 195.302674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:22:43.557343  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:22:43.557353  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:43.557363  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:43.557371  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:43.561306  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.561756  407433 pod_ready.go:93] pod "kube-proxy-fkzqr" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:43.561775  407433 pod_ready.go:82] duration metric: took 399.513689ms for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:43.561786  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:43.757890  407433 request.go:632] Waited for 195.995772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:22:43.757962  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:22:43.757967  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:43.757976  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:43.757982  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:43.761766  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.957983  407433 request.go:632] Waited for 195.42551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:43.958056  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:43.958062  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:43.958072  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:43.958078  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:43.961672  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.962521  407433 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:43.962544  407433 pod_ready.go:82] duration metric: took 400.749029ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:43.962557  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:44.157168  407433 request.go:632] Waited for 194.496229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:22:44.157243  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:22:44.157249  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:44.157257  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:44.157261  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:44.161439  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:44.357491  407433 request.go:632] Waited for 195.406494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:44.357559  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:44.357564  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:44.357572  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:44.357576  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:44.361412  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:44.361938  407433 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:44.361961  407433 pod_ready.go:82] duration metric: took 399.39545ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:44.361973  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:44.557153  407433 request.go:632] Waited for 195.068437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:22:44.557219  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:22:44.557225  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:44.557232  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:44.557238  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:44.561658  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:44.757919  407433 request.go:632] Waited for 195.424954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:44.757989  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:44.757995  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:44.758002  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:44.758006  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:44.762165  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:44.763001  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:44.763028  407433 pod_ready.go:82] duration metric: took 401.047381ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:44.763043  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:44.957578  407433 request.go:632] Waited for 194.437629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:44.957639  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:44.957645  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:44.957653  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:44.957658  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:44.961740  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:45.157777  407433 request.go:632] Waited for 195.385409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.157877  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.157886  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:45.157896  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:45.157904  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:45.161679  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:45.357862  407433 request.go:632] Waited for 94.300622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:45.357935  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:45.357942  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:45.357954  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:45.357962  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:45.361543  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:45.557595  407433 request.go:632] Waited for 195.417854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.557668  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.557676  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:45.557688  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:45.557696  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:45.561494  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:45.763256  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:45.763291  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:45.763304  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:45.763309  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:45.767863  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:45.958122  407433 request.go:632] Waited for 189.42553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.958187  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.958192  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:45.958200  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:45.958204  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:45.961616  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:46.263956  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:46.263983  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:46.263995  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:46.264001  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:46.267522  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:46.357364  407433 request.go:632] Waited for 88.387695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:46.357421  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:46.357426  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:46.357434  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:46.357438  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:46.361662  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:46.764296  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:46.764325  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:46.764335  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:46.764341  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:46.770081  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:46.770843  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:46.770869  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:46.770883  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:46.770892  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:46.774337  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:46.775088  407433 pod_ready.go:103] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:47.263519  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:47.263548  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.263557  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.263562  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.267707  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:47.268340  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:47.268356  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.268365  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.268370  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.271526  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:47.764247  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:47.764274  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.764285  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.764289  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.767691  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:47.768263  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:47.768279  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.768287  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.768292  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.772194  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:47.772988  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:47.773010  407433 pod_ready.go:82] duration metric: took 3.009958286s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:47.773024  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:47.773091  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:22:47.773098  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.773107  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.773113  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.775884  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:47.957918  407433 request.go:632] Waited for 181.421489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:47.958003  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:47.958013  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.958025  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.958035  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.961770  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:47.962588  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:47.962607  407433 pod_ready.go:82] duration metric: took 189.574431ms for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:47.962618  407433 pod_ready.go:39] duration metric: took 19.902306936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
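The pod_ready.go lines above show the same pattern repeated for each control-plane pod: GET the pod, GET its node, and re-poll until the pod's Ready condition is True. A minimal sketch of that wait loop using client-go is shown below; this is an illustrative example, not minikube source, and it assumes a recent client-go/apimachinery (wait.PollUntilContextTimeout) plus a kubeconfig at the default location. The pod name used in main is just the one from the log above.

// Illustrative sketch: poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-628553-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}

The "Waited for ... due to client-side throttling" lines in the log come from client-go's default QPS/Burst rate limiter on the rest.Config; raising cfg.QPS and cfg.Burst would reduce those waits, at the cost of more API traffic.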
	I1007 12:22:47.962636  407433 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:22:47.962704  407433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:22:47.981086  407433 api_server.go:72] duration metric: took 35.600013912s to wait for apiserver process to appear ...
	I1007 12:22:47.981128  407433 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:22:47.981157  407433 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:22:47.987797  407433 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1007 12:22:47.987879  407433 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:22:47.987885  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.987897  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.987904  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.988886  407433 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:22:47.989005  407433 api_server.go:141] control plane version: v1.31.1
	I1007 12:22:47.989025  407433 api_server.go:131] duration metric: took 7.889956ms to wait for apiserver health ...
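After the Ready waits, the log shows a plain HTTPS probe of the apiserver's /healthz endpoint followed by a GET of /version to read the control-plane version. A minimal sketch of those two probes with net/http follows; it is illustrative only (not minikube source) and skips TLS verification purely to keep the example short.

// Illustrative sketch: probe the apiserver's /healthz and /version endpoints.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.110:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
	}
}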
	I1007 12:22:47.989046  407433 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:22:48.157497  407433 request.go:632] Waited for 168.357325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:22:48.157582  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:22:48.157590  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:48.157598  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:48.157606  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:48.170794  407433 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1007 12:22:48.178444  407433 system_pods.go:59] 26 kube-system pods found
	I1007 12:22:48.178489  407433 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:22:48.178500  407433 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:22:48.178507  407433 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:22:48.178511  407433 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:22:48.178515  407433 system_pods.go:61] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:22:48.178519  407433 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:22:48.178522  407433 system_pods.go:61] "kindnet-rwk2r" [8ec7b1f3-d6b5-4e44-8574-c197eb45bf28] Running
	I1007 12:22:48.178525  407433 system_pods.go:61] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:22:48.178529  407433 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:22:48.178533  407433 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:22:48.178538  407433 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:22:48.178543  407433 system_pods.go:61] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:22:48.178547  407433 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:22:48.178555  407433 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:22:48.178564  407433 system_pods.go:61] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:22:48.178570  407433 system_pods.go:61] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:22:48.178576  407433 system_pods.go:61] "kube-proxy-fkzqr" [16f7acfc-13b5-426d-9b0a-59a5131fc297] Running
	I1007 12:22:48.178581  407433 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:22:48.178586  407433 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:22:48.178602  407433 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:22:48.178610  407433 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:22:48.178613  407433 system_pods.go:61] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:22:48.178616  407433 system_pods.go:61] "kube-vip-ha-628553" [56148ec7-dffa-4dfc-8414-c9feb65b09d3] Running
	I1007 12:22:48.178619  407433 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:22:48.178622  407433 system_pods.go:61] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:22:48.178625  407433 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:22:48.178631  407433 system_pods.go:74] duration metric: took 189.575174ms to wait for pod list to return data ...
	I1007 12:22:48.178641  407433 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:22:48.358157  407433 request.go:632] Waited for 179.407704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:22:48.358239  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:22:48.358248  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:48.358260  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:48.358269  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:48.362697  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:48.363032  407433 default_sa.go:45] found service account: "default"
	I1007 12:22:48.363053  407433 default_sa.go:55] duration metric: took 184.404861ms for default service account to be created ...
	I1007 12:22:48.363066  407433 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:22:48.558136  407433 request.go:632] Waited for 194.970967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:22:48.558208  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:22:48.558217  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:48.558228  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:48.558234  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:48.569105  407433 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:22:48.578079  407433 system_pods.go:86] 26 kube-system pods found
	I1007 12:22:48.578116  407433 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:22:48.578125  407433 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:22:48.578132  407433 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:22:48.578136  407433 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:22:48.578140  407433 system_pods.go:89] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:22:48.578143  407433 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:22:48.578146  407433 system_pods.go:89] "kindnet-rwk2r" [8ec7b1f3-d6b5-4e44-8574-c197eb45bf28] Running
	I1007 12:22:48.578152  407433 system_pods.go:89] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:22:48.578156  407433 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:22:48.578162  407433 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:22:48.578167  407433 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:22:48.578172  407433 system_pods.go:89] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:22:48.578180  407433 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:22:48.578187  407433 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:22:48.578196  407433 system_pods.go:89] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:22:48.578202  407433 system_pods.go:89] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:22:48.578212  407433 system_pods.go:89] "kube-proxy-fkzqr" [16f7acfc-13b5-426d-9b0a-59a5131fc297] Running
	I1007 12:22:48.578218  407433 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:22:48.578223  407433 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:22:48.578230  407433 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:22:48.578236  407433 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:22:48.578244  407433 system_pods.go:89] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:22:48.578249  407433 system_pods.go:89] "kube-vip-ha-628553" [56148ec7-dffa-4dfc-8414-c9feb65b09d3] Running
	I1007 12:22:48.578257  407433 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:22:48.578262  407433 system_pods.go:89] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:22:48.578270  407433 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:22:48.578278  407433 system_pods.go:126] duration metric: took 215.203312ms to wait for k8s-apps to be running ...
	I1007 12:22:48.578288  407433 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:22:48.578337  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:22:48.595876  407433 system_svc.go:56] duration metric: took 17.573712ms WaitForService to wait for kubelet
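The system_svc.go step above runs "systemctl is-active --quiet service kubelet" over minikube's SSH runner inside the VM. As a hedged illustration of the same check (assuming local execution rather than SSH), an exit status of 0 from systemctl means the unit is active:

// Illustrative sketch: check whether the kubelet systemd unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}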
	I1007 12:22:48.595992  407433 kubeadm.go:582] duration metric: took 36.214918629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:22:48.596027  407433 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:22:48.757411  407433 request.go:632] Waited for 161.243279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:22:48.757479  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:22:48.757486  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:48.757498  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:48.757506  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:48.761378  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:48.762688  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:22:48.762714  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:22:48.762729  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:22:48.762735  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:22:48.762740  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:22:48.762744  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:22:48.762749  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:22:48.762754  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:22:48.762760  407433 node_conditions.go:105] duration metric: took 166.71889ms to run NodePressure ...
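The node_conditions.go lines read each node's capacity from the nodes list returned by the API (here: 17734596Ki of ephemeral storage and 2 CPUs per node). A small client-go sketch of reading those same fields is below; it is an illustrative example under the same kubeconfig assumption as above, not minikube source.

// Illustrative sketch: list nodes and print CPU and ephemeral-storage capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}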
	I1007 12:22:48.762780  407433 start.go:241] waiting for startup goroutines ...
	I1007 12:22:48.762829  407433 start.go:255] writing updated cluster config ...
	I1007 12:22:48.764946  407433 out.go:201] 
	I1007 12:22:48.766449  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:22:48.766593  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:22:48.768265  407433 out.go:177] * Starting "ha-628553-m03" control-plane node in "ha-628553" cluster
	I1007 12:22:48.769400  407433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:22:48.769441  407433 cache.go:56] Caching tarball of preloaded images
	I1007 12:22:48.769551  407433 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:22:48.769561  407433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:22:48.769658  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:22:48.769852  407433 start.go:360] acquireMachinesLock for ha-628553-m03: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:22:48.769900  407433 start.go:364] duration metric: took 26.129µs to acquireMachinesLock for "ha-628553-m03"
	I1007 12:22:48.769918  407433 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:22:48.769923  407433 fix.go:54] fixHost starting: m03
	I1007 12:22:48.770262  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:22:48.770311  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:22:48.788039  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45227
	I1007 12:22:48.788576  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:22:48.789119  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:22:48.789141  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:22:48.789458  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:22:48.789653  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:22:48.789784  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetState
	I1007 12:22:48.791257  407433 fix.go:112] recreateIfNeeded on ha-628553-m03: state=Stopped err=<nil>
	I1007 12:22:48.791281  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	W1007 12:22:48.791436  407433 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:22:48.793229  407433 out.go:177] * Restarting existing kvm2 VM for "ha-628553-m03" ...
	I1007 12:22:48.794588  407433 main.go:141] libmachine: (ha-628553-m03) Calling .Start
	I1007 12:22:48.794790  407433 main.go:141] libmachine: (ha-628553-m03) Ensuring networks are active...
	I1007 12:22:48.795632  407433 main.go:141] libmachine: (ha-628553-m03) Ensuring network default is active
	I1007 12:22:48.796059  407433 main.go:141] libmachine: (ha-628553-m03) Ensuring network mk-ha-628553 is active
	I1007 12:22:48.796459  407433 main.go:141] libmachine: (ha-628553-m03) Getting domain xml...
	I1007 12:22:48.797244  407433 main.go:141] libmachine: (ha-628553-m03) Creating domain...
	I1007 12:22:50.059311  407433 main.go:141] libmachine: (ha-628553-m03) Waiting to get IP...
	I1007 12:22:50.060372  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:50.060879  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:50.060964  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:50.060852  409345 retry.go:31] will retry after 192.791787ms: waiting for machine to come up
	I1007 12:22:50.255484  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:50.256001  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:50.256027  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:50.255953  409345 retry.go:31] will retry after 253.611969ms: waiting for machine to come up
	I1007 12:22:50.511637  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:50.512045  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:50.512063  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:50.512005  409345 retry.go:31] will retry after 324.599473ms: waiting for machine to come up
	I1007 12:22:50.838737  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:50.839303  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:50.839327  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:50.839255  409345 retry.go:31] will retry after 528.387577ms: waiting for machine to come up
	I1007 12:22:51.368905  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:51.369291  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:51.369315  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:51.369243  409345 retry.go:31] will retry after 720.335589ms: waiting for machine to come up
	I1007 12:22:52.091215  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:52.091630  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:52.091650  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:52.091588  409345 retry.go:31] will retry after 812.339657ms: waiting for machine to come up
	I1007 12:22:52.905101  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:52.905638  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:52.905670  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:52.905581  409345 retry.go:31] will retry after 1.091749856s: waiting for machine to come up
	I1007 12:22:53.999247  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:53.999746  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:53.999771  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:53.999680  409345 retry.go:31] will retry after 1.129170214s: waiting for machine to come up
	I1007 12:22:55.130925  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:55.131502  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:55.131537  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:55.131443  409345 retry.go:31] will retry after 1.171260829s: waiting for machine to come up
	I1007 12:22:56.304318  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:56.304945  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:56.304976  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:56.304894  409345 retry.go:31] will retry after 2.157722162s: waiting for machine to come up
	I1007 12:22:58.464571  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:58.464987  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:58.465010  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:58.464945  409345 retry.go:31] will retry after 2.28932583s: waiting for machine to come up
	I1007 12:23:00.756368  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:00.756994  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:23:00.757021  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:23:00.756934  409345 retry.go:31] will retry after 2.519358741s: waiting for machine to come up
	I1007 12:23:03.277504  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:03.277859  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:23:03.277897  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:23:03.277846  409345 retry.go:31] will retry after 3.670860774s: waiting for machine to come up
	I1007 12:23:06.951953  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:06.952402  407433 main.go:141] libmachine: (ha-628553-m03) Found IP for machine: 192.168.39.149
	I1007 12:23:06.952443  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has current primary IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:06.952454  407433 main.go:141] libmachine: (ha-628553-m03) Reserving static IP address...
	I1007 12:23:06.952862  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "ha-628553-m03", mac: "52:54:00:3c:9f:34", ip: "192.168.39.149"} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:06.952897  407433 main.go:141] libmachine: (ha-628553-m03) DBG | skip adding static IP to network mk-ha-628553 - found existing host DHCP lease matching {name: "ha-628553-m03", mac: "52:54:00:3c:9f:34", ip: "192.168.39.149"}
	I1007 12:23:06.952906  407433 main.go:141] libmachine: (ha-628553-m03) Reserved static IP address: 192.168.39.149
	I1007 12:23:06.952914  407433 main.go:141] libmachine: (ha-628553-m03) Waiting for SSH to be available...
	I1007 12:23:06.952927  407433 main.go:141] libmachine: (ha-628553-m03) DBG | Getting to WaitForSSH function...
	I1007 12:23:06.955043  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:06.955351  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:06.955381  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:06.955448  407433 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH client type: external
	I1007 12:23:06.955503  407433 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa (-rw-------)
	I1007 12:23:06.955539  407433 main.go:141] libmachine: (ha-628553-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:23:06.955561  407433 main.go:141] libmachine: (ha-628553-m03) DBG | About to run SSH command:
	I1007 12:23:06.955572  407433 main.go:141] libmachine: (ha-628553-m03) DBG | exit 0
	I1007 12:23:07.079169  407433 main.go:141] libmachine: (ha-628553-m03) DBG | SSH cmd err, output: <nil>: 
	I1007 12:23:07.079565  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:23:07.080385  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:23:07.083418  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.083852  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.083879  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.084189  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:23:07.084545  407433 machine.go:93] provisionDockerMachine start ...
	I1007 12:23:07.084571  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:07.084826  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.087551  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.087978  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.088009  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.088182  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:07.088391  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.088547  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.088740  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:07.088923  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:23:07.089188  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:23:07.089206  407433 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:23:07.196059  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:23:07.196088  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:23:07.196335  407433 buildroot.go:166] provisioning hostname "ha-628553-m03"
	I1007 12:23:07.196347  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:23:07.196551  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.199203  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.199616  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.199644  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.199833  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:07.200016  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.200171  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.200290  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:07.200443  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:23:07.200715  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:23:07.200731  407433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m03 && echo "ha-628553-m03" | sudo tee /etc/hostname
	I1007 12:23:07.323544  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m03
	
	I1007 12:23:07.323582  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.326726  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.327122  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.327150  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.327368  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:07.327582  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.327771  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.327933  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:07.328149  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:23:07.328353  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:23:07.328376  407433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:23:07.450543  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:23:07.450579  407433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:23:07.450611  407433 buildroot.go:174] setting up certificates
	I1007 12:23:07.450626  407433 provision.go:84] configureAuth start
	I1007 12:23:07.450642  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:23:07.451018  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:23:07.454048  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.454630  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.454686  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.454833  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.457738  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.458176  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.458206  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.458383  407433 provision.go:143] copyHostCerts
	I1007 12:23:07.458422  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:23:07.458463  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:23:07.458473  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:23:07.458535  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:23:07.458607  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:23:07.458625  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:23:07.458631  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:23:07.458658  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:23:07.458702  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:23:07.458718  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:23:07.458724  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:23:07.458745  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:23:07.458791  407433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m03 san=[127.0.0.1 192.168.39.149 ha-628553-m03 localhost minikube]
	I1007 12:23:07.670948  407433 provision.go:177] copyRemoteCerts
	I1007 12:23:07.671039  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:23:07.671068  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.673765  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.674173  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.674201  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.674449  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:07.674674  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.674803  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:07.674918  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:23:07.758450  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:23:07.758534  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:23:07.784428  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:23:07.784519  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:23:07.810095  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:23:07.810186  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:23:07.836425  407433 provision.go:87] duration metric: took 385.779504ms to configureAuth
	I1007 12:23:07.836456  407433 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:23:07.836690  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:23:07.836767  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.839503  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.839941  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.839967  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.840189  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:07.840398  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.840560  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.840709  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:07.840928  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:23:07.841153  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:23:07.841174  407433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:23:08.085320  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:23:08.085371  407433 machine.go:96] duration metric: took 1.000808183s to provisionDockerMachine
	I1007 12:23:08.085390  407433 start.go:293] postStartSetup for "ha-628553-m03" (driver="kvm2")
	I1007 12:23:08.085410  407433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:23:08.085436  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.085777  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:23:08.085815  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:08.088687  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.089100  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.089153  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.089292  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:08.089520  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.089746  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:08.089915  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:23:08.175403  407433 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:23:08.180139  407433 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:23:08.180174  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:23:08.180280  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:23:08.180380  407433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:23:08.180392  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:23:08.180502  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:23:08.193008  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:23:08.221327  407433 start.go:296] duration metric: took 135.910859ms for postStartSetup
	I1007 12:23:08.221405  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.221767  407433 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 12:23:08.221797  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:08.224699  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.225137  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.225168  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.225344  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:08.225570  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.225756  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:08.225877  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:23:08.311077  407433 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1007 12:23:08.311172  407433 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1007 12:23:08.370424  407433 fix.go:56] duration metric: took 19.600489752s for fixHost
	I1007 12:23:08.370480  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:08.373852  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.374234  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.374267  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.374431  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:08.374676  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.374884  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.375076  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:08.375312  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:23:08.375552  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:23:08.375573  407433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:23:08.484001  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728303788.441688765
	
	I1007 12:23:08.484028  407433 fix.go:216] guest clock: 1728303788.441688765
	I1007 12:23:08.484036  407433 fix.go:229] Guest: 2024-10-07 12:23:08.441688765 +0000 UTC Remote: 2024-10-07 12:23:08.370456366 +0000 UTC m=+402.287892272 (delta=71.232399ms)
	I1007 12:23:08.484062  407433 fix.go:200] guest clock delta is within tolerance: 71.232399ms
	I1007 12:23:08.484071  407433 start.go:83] releasing machines lock for "ha-628553-m03", held for 19.714158797s
	I1007 12:23:08.484104  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.484386  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:23:08.487120  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.487548  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.487576  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.489523  407433 out.go:177] * Found network options:
	I1007 12:23:08.490983  407433 out.go:177]   - NO_PROXY=192.168.39.110,192.168.39.169
	W1007 12:23:08.492418  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:23:08.492450  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:23:08.492471  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.493243  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.493459  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.493570  407433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:23:08.493623  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	W1007 12:23:08.493646  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:23:08.493673  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:23:08.493743  407433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:23:08.493761  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:08.496386  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.496480  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.496868  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.496897  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.496924  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.496943  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.497079  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:08.497346  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.497377  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:08.497541  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:08.497696  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.497840  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:23:08.497866  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:08.498023  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:23:08.730433  407433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:23:08.737080  407433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:23:08.737155  407433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:23:08.755299  407433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:23:08.755325  407433 start.go:495] detecting cgroup driver to use...
	I1007 12:23:08.755389  407433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:23:08.780038  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:23:08.795377  407433 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:23:08.795440  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:23:08.811910  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:23:08.828314  407433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:23:08.951245  407433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:23:09.115120  407433 docker.go:233] disabling docker service ...
	I1007 12:23:09.115225  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:23:09.133356  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:23:09.148971  407433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:23:09.293835  407433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:23:09.423867  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:23:09.439087  407433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:23:09.458897  407433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:23:09.459001  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.469902  407433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:23:09.469994  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.481722  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.492505  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.505280  407433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:23:09.518945  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.530830  407433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.554731  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.569925  407433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:23:09.580795  407433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:23:09.580888  407433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:23:09.597673  407433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:23:09.612157  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:23:09.766539  407433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:23:09.880706  407433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:23:09.880792  407433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:23:09.885746  407433 start.go:563] Will wait 60s for crictl version
	I1007 12:23:09.885814  407433 ssh_runner.go:195] Run: which crictl
	I1007 12:23:09.889812  407433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:23:09.937961  407433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:23:09.938036  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:23:09.967760  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:23:09.998712  407433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:23:10.000182  407433 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:23:10.001820  407433 out.go:177]   - env NO_PROXY=192.168.39.110,192.168.39.169
	I1007 12:23:10.003205  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:23:10.006117  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:10.006523  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:10.006555  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:10.006741  407433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:23:10.011690  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:23:10.025541  407433 mustload.go:65] Loading cluster: ha-628553
	I1007 12:23:10.025766  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:23:10.026028  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:23:10.026071  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:23:10.041914  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I1007 12:23:10.042428  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:23:10.042951  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:23:10.042983  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:23:10.043362  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:23:10.043554  407433 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:23:10.045158  407433 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:23:10.045562  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:23:10.045608  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:23:10.083352  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I1007 12:23:10.083776  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:23:10.084261  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:23:10.084287  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:23:10.084725  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:23:10.084948  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:23:10.085117  407433 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.149
	I1007 12:23:10.085130  407433 certs.go:194] generating shared ca certs ...
	I1007 12:23:10.085148  407433 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:23:10.085306  407433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:23:10.085370  407433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:23:10.085384  407433 certs.go:256] generating profile certs ...
	I1007 12:23:10.085494  407433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:23:10.085567  407433 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5
	I1007 12:23:10.085617  407433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:23:10.085634  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:23:10.085655  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:23:10.085672  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:23:10.085688  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:23:10.085710  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:23:10.085739  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:23:10.085758  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:23:10.085776  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:23:10.085842  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:23:10.085885  407433 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:23:10.085899  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:23:10.085932  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:23:10.085965  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:23:10.085997  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:23:10.086048  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:23:10.086084  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:23:10.086104  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:23:10.086121  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:23:10.086157  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:23:10.089488  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:23:10.089878  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:23:10.089907  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:23:10.090103  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:23:10.090299  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:23:10.090474  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:23:10.090656  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:23:10.163437  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:23:10.168780  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:23:10.180806  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:23:10.185150  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:23:10.198300  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:23:10.203414  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:23:10.216836  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:23:10.222330  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:23:10.234652  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:23:10.239420  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:23:10.252193  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:23:10.256802  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:23:10.268584  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:23:10.295050  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:23:10.320755  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:23:10.347772  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:23:10.373490  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:23:10.399842  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:23:10.425371  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:23:10.452365  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:23:10.479533  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:23:10.504233  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:23:10.528470  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:23:10.553208  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:23:10.571603  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:23:10.591578  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:23:10.614225  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:23:10.634324  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:23:10.653367  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:23:10.670424  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:23:10.687921  407433 ssh_runner.go:195] Run: openssl version
	I1007 12:23:10.693659  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:23:10.705376  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:23:10.710726  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:23:10.710791  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:23:10.718248  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:23:10.732612  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:23:10.745398  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:23:10.750153  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:23:10.750214  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:23:10.756370  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:23:10.768784  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:23:10.780787  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:23:10.785548  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:23:10.785622  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:23:10.791760  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
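The commands above copy the extra CA certificates into /usr/share/ca-certificates and link them into /etc/ssl/certs under their OpenSSL subject-hash names (b5213941.0, 3ec20f2e.0, 51391683.0) so TLS clients on the node can resolve them. A minimal sketch of that pattern for the minikubeCA.pem entry, mirroring the logged commands with standard openssl/coreutils flags:

    # Link the cert into the trust directory and add its subject-hash alias.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0 here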
	I1007 12:23:10.803743  407433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:23:10.808736  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:23:10.814899  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:23:10.821143  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:23:10.827606  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:23:10.833912  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:23:10.840134  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
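The -checkend 86400 runs above make openssl exit non-zero if a certificate will expire within the next 86400 seconds, i.e. they assert every control-plane cert is still good for at least another 24 hours. A small sketch of the same check against one of the files listed above:

    # Exit status 0: valid for at least another 24h; non-zero: due for regeneration.
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "etcd server cert ok for the next 24h"
    else
        echo "etcd server cert expires within 24h"
    fi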
	I1007 12:23:10.846577  407433 kubeadm.go:934] updating node {m03 192.168.39.149 8443 v1.31.1 crio true true} ...
	I1007 12:23:10.846676  407433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:23:10.846714  407433 kube-vip.go:115] generating kube-vip config ...
	I1007 12:23:10.846760  407433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:23:10.864581  407433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:23:10.864668  407433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
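The static pod manifest above runs kube-vip with leader election (vip_leaderelection) and control-plane load balancing (lb_enable/lb_port 8443), advertising the HA VIP 192.168.39.254 on eth0. A hedged way to confirm the VIP is live on the elected leader; this probe is an assumption for illustration, not something the test executes (it relies on /healthz being reachable anonymously under default RBAC):

    # On the current kube-vip leader, the VIP should be bound to eth0 ...
    ip -4 addr show dev eth0 | grep -w 192.168.39.254
    # ... and the apiserver should answer on it.
    curl -k https://192.168.39.254:8443/healthz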
	I1007 12:23:10.864739  407433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:23:10.875792  407433 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:23:10.875886  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:23:10.886447  407433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:23:10.904363  407433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:23:10.922695  407433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:23:10.940459  407433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:23:10.944764  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:23:10.958113  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:23:11.105627  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:23:11.125550  407433 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:23:11.125888  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:23:11.127716  407433 out.go:177] * Verifying Kubernetes components...
	I1007 12:23:11.129145  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:23:11.305386  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:23:11.325083  407433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:23:11.325389  407433 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:23:11.325462  407433 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:23:11.325756  407433 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m03" to be "Ready" ...
	I1007 12:23:11.325833  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:11.325841  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:11.325849  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:11.325852  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:11.329984  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:11.826772  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:11.826797  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:11.826807  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:11.826812  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:11.831688  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:11.832196  407433 node_ready.go:49] node "ha-628553-m03" has status "Ready":"True"
	I1007 12:23:11.832220  407433 node_ready.go:38] duration metric: took 506.44323ms for node "ha-628553-m03" to be "Ready" ...
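The loop above polls GET /api/v1/nodes/ha-628553-m03 until the node's Ready condition reports True. Roughly the same check expressed with kubectl, assuming the kubeconfig context is named after the ha-628553 profile (a sketch; the test drives the raw API instead):

    kubectl --context ha-628553 wait node/ha-628553-m03 --for=condition=Ready --timeout=6m
    # or read the condition directly
    kubectl --context ha-628553 get node ha-628553-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'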
	I1007 12:23:11.832245  407433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:23:11.832336  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:23:11.832347  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:11.832358  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:11.832365  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:11.848204  407433 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1007 12:23:11.861310  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:11.861435  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:11.861446  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:11.861458  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:11.861466  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:11.870384  407433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:23:11.871506  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:11.871524  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:11.871535  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:11.871541  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:11.877552  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:23:12.361651  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:12.361681  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:12.361692  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:12.361698  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:12.365468  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:12.366272  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:12.366289  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:12.366297  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:12.366302  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:12.369324  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:12.862243  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:12.862279  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:12.862291  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:12.862297  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:12.867091  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:12.868063  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:12.868089  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:12.868100  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:12.868106  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:12.871897  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:13.361624  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:13.361649  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:13.361658  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:13.361662  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:13.365490  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:13.366348  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:13.366364  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:13.366373  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:13.366377  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:13.369870  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:13.862332  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:13.862356  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:13.862365  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:13.862368  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:13.866523  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:13.867251  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:13.867269  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:13.867277  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:13.867282  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:13.870634  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:13.871447  407433 pod_ready.go:103] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:14.362193  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:14.362228  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:14.362240  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:14.362245  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:14.366181  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:14.367066  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:14.367088  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:14.367100  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:14.367106  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:14.370503  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:14.862599  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:14.862626  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:14.862640  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:14.862646  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:14.867107  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:14.867797  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:14.867817  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:14.867825  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:14.867830  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:14.871636  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:15.362516  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:15.362542  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:15.362550  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:15.362585  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:15.366026  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:15.366840  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:15.366856  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:15.366863  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:15.366868  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:15.369831  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:15.861611  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:15.861634  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:15.861642  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:15.861647  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:15.866159  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:15.866896  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:15.866915  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:15.866922  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:15.866927  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:15.870596  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:16.361830  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:16.361863  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:16.361872  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:16.361876  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:16.366367  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:16.367293  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:16.367315  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:16.367327  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:16.367332  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:16.371071  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:16.371636  407433 pod_ready.go:103] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:16.862048  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:16.862076  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:16.862086  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:16.862092  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:16.866414  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:16.867130  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:16.867151  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:16.867163  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:16.867167  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:16.870850  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:17.362394  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:17.362418  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:17.362426  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:17.362430  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:17.366486  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:17.367294  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:17.367312  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:17.367320  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:17.367324  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:17.371106  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:17.862513  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:17.862539  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:17.862548  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:17.862554  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:17.866633  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:17.867337  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:17.867354  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:17.867363  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:17.867367  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:17.870721  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:18.361539  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:18.361562  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:18.361573  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:18.361578  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:18.365313  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:18.366026  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:18.366043  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:18.366053  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:18.366058  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:18.369343  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:18.861585  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:18.861610  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:18.861618  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:18.861621  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:18.865321  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:18.866215  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:18.866239  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:18.866250  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:18.866254  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:18.869184  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:18.869834  407433 pod_ready.go:103] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:19.361628  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:19.361652  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:19.361661  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:19.361665  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:19.365480  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:19.367101  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:19.367123  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:19.367137  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:19.367143  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:19.370524  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:19.861746  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:19.861771  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:19.861780  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:19.861785  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:19.865697  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:19.866576  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:19.866601  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:19.866613  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:19.866621  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:19.869999  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.362008  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:20.362035  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.362046  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.362052  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.365798  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.366543  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:20.366570  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.366583  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.366588  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.370420  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.862465  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:20.862494  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.862506  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.862512  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.866743  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:20.867603  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:20.867625  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.867637  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.867646  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.876747  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:23:20.877196  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:20.877215  407433 pod_ready.go:82] duration metric: took 9.015873885s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
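pod_ready applies the same polling per system-critical pod, re-fetching both the pod and the node it runs on until the pod's Ready condition is True. An equivalent one-shot wait with kubectl, using the pod and label names from the log and the same context-name assumption as above (a sketch, not the test's own mechanism):

    kubectl --context ha-628553 -n kube-system wait pod/coredns-7c65d6cfc9-ktmzq \
      --for=condition=Ready --timeout=6m
    # or wait on every CoreDNS replica via its label
    kubectl --context ha-628553 -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=6m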
	I1007 12:23:20.877228  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.877303  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:23:20.877313  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.877323  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.877329  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.880598  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.881340  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:20.881359  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.881367  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.881373  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.884755  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.885234  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:20.885253  407433 pod_ready.go:82] duration metric: took 8.017506ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.885264  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.885338  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:23:20.885346  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.885356  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.885363  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.888642  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.889384  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:20.889408  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.889417  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.889423  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.892846  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.893352  407433 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:20.893371  407433 pod_ready.go:82] duration metric: took 8.101384ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.893381  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.893450  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:23:20.893457  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.893465  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.893469  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.896263  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:20.897009  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:20.897028  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.897039  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.897045  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.900030  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:20.900719  407433 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:20.900743  407433 pod_ready.go:82] duration metric: took 7.354933ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.900758  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.900849  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:20.900859  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.900870  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.900878  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.904334  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.905453  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:20.905472  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.905483  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.905489  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.908818  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:21.401777  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:21.401802  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:21.401810  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:21.401816  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:21.405454  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:21.406241  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:21.406263  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:21.406275  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:21.406281  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:21.409714  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:21.901278  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:21.901305  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:21.901318  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:21.901322  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:21.905374  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:21.906206  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:21.906228  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:21.906239  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:21.906245  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:21.909773  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:22.401497  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:22.401525  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:22.401536  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:22.401541  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:22.405874  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:22.407120  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:22.407144  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:22.407155  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:22.407161  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:22.413762  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:23:22.901518  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:22.901544  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:22.901552  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:22.901557  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:22.906234  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:22.907167  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:22.907190  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:22.907200  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:22.907205  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:22.910393  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:22.910825  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:23.401248  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:23.401280  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:23.401293  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:23.401298  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:23.407107  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:23:23.408075  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:23.408096  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:23.408106  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:23.408111  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:23.415961  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:23:23.901287  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:23.901319  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:23.901331  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:23.901337  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:23.905904  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:23.906565  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:23.906581  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:23.906590  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:23.906595  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:23.910006  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:24.401161  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:24.401190  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:24.401202  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:24.401209  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:24.404839  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:24.405564  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:24.405583  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:24.405593  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:24.405598  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:24.408750  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:24.901100  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:24.901137  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:24.901151  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:24.901156  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:24.905321  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:24.906076  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:24.906098  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:24.906110  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:24.906116  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:24.909394  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:25.402019  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:25.402048  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:25.402060  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:25.402066  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:25.406096  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:25.406780  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:25.406800  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:25.406811  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:25.406817  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:25.410372  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:25.411120  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:25.901431  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:25.901462  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:25.901476  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:25.901485  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:25.905181  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:25.905913  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:25.905932  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:25.905943  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:25.905948  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:25.909147  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:26.401392  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:26.401413  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:26.401422  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:26.401425  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:26.404670  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:26.405486  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:26.405509  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:26.405524  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:26.405531  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:26.408669  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:26.901799  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:26.901824  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:26.901836  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:26.901841  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:26.905889  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:26.906791  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:26.906814  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:26.906825  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:26.906833  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:26.910509  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:27.401061  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:27.401091  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:27.401101  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:27.401107  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:27.404502  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:27.405491  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:27.405516  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:27.405531  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:27.405537  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:27.408535  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:27.901655  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:27.901682  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:27.901693  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:27.901698  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:27.906766  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:23:27.907910  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:27.907930  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:27.907943  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:27.907949  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:27.910831  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:27.911452  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:28.401384  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:28.401412  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:28.401421  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:28.401426  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:28.405773  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:28.406658  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:28.406679  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:28.406690  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:28.406697  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:28.409901  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:28.901341  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:28.901371  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:28.901380  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:28.901389  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:28.905747  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:28.907292  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:28.907314  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:28.907326  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:28.907331  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:28.910952  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:29.401668  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:29.401703  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:29.401714  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:29.401719  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:29.405845  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:29.406720  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:29.406742  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:29.406753  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:29.406757  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:29.409965  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:29.901326  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:29.901360  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:29.901369  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:29.901373  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:29.905350  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:29.906192  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:29.906223  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:29.906235  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:29.906243  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:29.910387  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:30.401772  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:30.401801  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:30.401813  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:30.401819  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:30.406389  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:30.407392  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:30.407416  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:30.407429  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:30.407436  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:30.410951  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:30.411545  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:30.901925  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:30.901958  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:30.901970  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:30.901977  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:30.905611  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:30.906422  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:30.906444  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:30.906455  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:30.906460  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:30.910537  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:31.401800  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:31.401827  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:31.401836  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:31.401840  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:31.406134  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:31.407148  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:31.407173  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:31.407191  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:31.407197  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:31.410926  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:31.901827  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:31.901858  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:31.901870  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:31.901878  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:31.906665  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:31.907501  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:31.907537  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:31.907549  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:31.907555  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:31.911140  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:32.400976  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:32.401003  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:32.401014  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:32.401019  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:32.405547  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:32.406242  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:32.406258  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:32.406265  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:32.406269  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:32.409439  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:32.901149  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:32.901181  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:32.901193  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:32.901198  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:32.905022  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:32.905716  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:32.905734  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:32.905744  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:32.905748  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:32.909130  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:32.909906  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:33.401283  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:33.401309  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:33.401318  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:33.401325  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:33.404886  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:33.405856  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:33.405881  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:33.405893  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:33.405901  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:33.409501  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:33.901882  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:33.901915  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:33.901925  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:33.901928  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:33.905378  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:33.906066  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:33.906083  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:33.906091  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:33.906095  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:33.909006  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:34.401235  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:34.401260  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:34.401269  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:34.401272  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:34.404838  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:34.406012  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:34.406031  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:34.406039  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:34.406045  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:34.409124  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:34.900951  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:34.900983  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:34.900993  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:34.900997  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:34.905147  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:34.905757  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:34.905776  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:34.905787  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:34.905794  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:34.908507  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:35.401863  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:35.401890  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:35.401901  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:35.401906  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:35.405387  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:35.406401  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:35.406446  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:35.406455  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:35.406459  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:35.409292  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:35.409806  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:35.901149  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:35.901196  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:35.901221  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:35.901229  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:35.904816  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:35.905574  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:35.905592  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:35.905602  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:35.905609  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:35.908238  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:36.401546  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:36.401580  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:36.401593  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:36.401598  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:36.405148  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:36.406022  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:36.406039  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:36.406048  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:36.406056  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:36.408821  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:36.901819  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:36.901855  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:36.901867  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:36.901876  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:36.905550  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:36.906357  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:36.906377  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:36.906387  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:36.906391  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:36.909398  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:37.401226  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:37.401258  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:37.401271  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:37.401279  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:37.406353  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:23:37.406945  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:37.406977  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:37.406989  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:37.406998  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:37.410073  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:37.410643  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:37.901879  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:37.901906  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:37.901917  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:37.901922  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:37.906062  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:37.906861  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:37.906877  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:37.906888  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:37.906895  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:37.910684  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:38.401696  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:38.401722  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:38.401731  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:38.401734  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:38.406385  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:38.407114  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:38.407137  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:38.407145  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:38.407150  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:38.410220  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:38.901333  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:38.901362  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:38.901371  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:38.901375  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:38.905673  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:38.906342  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:38.906358  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:38.906367  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:38.906372  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:38.909538  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:39.401617  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:39.401647  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:39.401658  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:39.401665  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:39.405325  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:39.406247  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:39.406263  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:39.406271  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:39.406275  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:39.408869  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:39.901009  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:39.901066  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:39.901079  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:39.901088  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:39.905186  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:39.906287  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:39.906303  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:39.906312  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:39.906316  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:39.909386  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:39.909910  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:40.401887  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:40.401919  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:40.401932  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:40.401938  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:40.405563  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:40.406179  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:40.406196  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:40.406204  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:40.406207  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:40.409031  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:40.901914  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:40.901947  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:40.901959  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:40.901964  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:40.905577  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:40.906160  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:40.906178  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:40.906187  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:40.906192  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:40.909439  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:41.401762  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:41.401788  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:41.401796  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:41.401801  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:41.405508  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:41.406192  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:41.406210  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:41.406219  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:41.406222  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:41.409156  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:41.901039  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:41.901069  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:41.901082  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:41.901088  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:41.904730  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:41.905638  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:41.905657  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:41.905667  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:41.905672  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:41.908263  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:42.401627  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:42.401654  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:42.401663  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:42.401668  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:42.406041  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:42.406703  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:42.406722  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:42.406730  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:42.406734  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:42.409745  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:42.410239  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:42.901587  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:42.901614  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:42.901622  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:42.901625  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:42.905571  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:42.906277  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:42.906295  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:42.906303  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:42.906307  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:42.909677  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:43.401661  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:43.401689  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:43.401697  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:43.401703  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:43.405188  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:43.406046  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:43.406065  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:43.406073  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:43.406077  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:43.409716  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:43.901229  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:43.901256  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:43.901263  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:43.901268  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:43.905115  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:43.905912  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:43.905929  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:43.905937  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:43.905941  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:43.908930  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:44.401253  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:44.401281  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.401293  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.401297  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.405017  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.406070  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:44.406089  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.406097  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.406101  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.409080  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:44.409563  407433 pod_ready.go:93] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.409581  407433 pod_ready.go:82] duration metric: took 23.50881715s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
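Note: the polling above is minikube's pod_ready.go loop, which alternately GETs the pod and its node until the pod reports the PodReady condition as True. A minimal client-go sketch of the same check follows; it is illustrative only (the kubeconfig path and pod name are assumptions, not minikube's actual code):

    // Sketch: report whether a pod's PodReady condition is True, the same
    // signal the log lines above wait for. Names and paths are assumptions.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := podIsReady(context.Background(), cs, "kube-system", "etcd-ha-628553-m03")
        fmt.Println(ready, err)
    }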
	I1007 12:23:44.409602  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.409726  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:23:44.409737  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.409744  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.409749  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.412715  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:44.413235  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:44.413247  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.413255  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.413258  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.416010  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:44.416481  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.416502  407433 pod_ready.go:82] duration metric: took 6.890773ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.416513  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.416581  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:23:44.416590  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.416598  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.416603  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.419667  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.420424  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:44.420458  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.420470  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.420476  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.423889  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.424313  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.424334  407433 pod_ready.go:82] duration metric: took 7.814307ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.424348  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.424417  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:23:44.424427  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.424437  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.424444  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.428190  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.428882  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:44.428900  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.428911  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.428918  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.431588  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:44.432108  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.432137  407433 pod_ready.go:82] duration metric: took 7.779602ms for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.432151  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.432238  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:23:44.432249  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.432260  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.432266  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.435639  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.436600  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:44.436617  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.436626  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.436630  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.440567  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.441253  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.441273  407433 pod_ready.go:82] duration metric: took 9.114345ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.441284  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.601679  407433 request.go:632] Waited for 160.319206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:23:44.601747  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:23:44.601755  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.601764  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.601768  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.605498  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.801775  407433 request.go:632] Waited for 195.353982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:44.801836  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:44.801841  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.801849  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.801854  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.805954  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:44.806553  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.806577  407433 pod_ready.go:82] duration metric: took 365.285871ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
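Note: the "Waited for ... due to client-side throttling, not priority and fairness" entries above are emitted by client-go's local token-bucket rate limiter (request.go), not by the API server's Priority and Fairness. The delay is governed by the QPS and Burst fields on rest.Config. A minimal sketch of loosening that limit, assuming a default kubeconfig; the values shown are illustrative, not minikube's settings:

    // Sketch: raise client-go's client-side rate limit so bursts of GETs like
    // the ones above are not delayed locally. QPS/Burst values are assumptions.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // client-go default is 5 requests/second
        cfg.Burst = 100 // client-go default burst is 10
        _ = kubernetes.NewForConfigOrDie(cfg) // clientset built from cfg uses the higher limits
    }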
	I1007 12:23:44.806590  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:45.002112  407433 request.go:632] Waited for 195.437696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:23:45.002184  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:23:45.002191  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:45.002201  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:45.002211  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:45.006294  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:45.202056  407433 request.go:632] Waited for 194.857504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:45.202132  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:45.202139  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:45.202151  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:45.202157  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:45.205444  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:45.402165  407433 request.go:632] Waited for 95.263491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:23:45.402239  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:23:45.402248  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:45.402258  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:45.402264  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:45.409336  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:23:45.601415  407433 request.go:632] Waited for 191.299121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:45.601498  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:45.601503  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:45.601511  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:45.601518  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:45.604967  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:45.605527  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:45.605552  407433 pod_ready.go:82] duration metric: took 798.95512ms for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:45.605564  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:45.802045  407433 request.go:632] Waited for 196.398573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:23:45.802132  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:23:45.802140  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:45.802150  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:45.802158  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:45.806194  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:46.001912  407433 request.go:632] Waited for 194.996337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:46.001992  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:46.001999  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:46.002009  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:46.002025  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:46.005973  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:46.006462  407433 pod_ready.go:93] pod "kube-proxy-956k4" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:46.006490  407433 pod_ready.go:82] duration metric: took 400.920874ms for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:46.006503  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:46.201881  407433 request.go:632] Waited for 195.304463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:23:46.201942  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:23:46.201948  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:46.201955  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:46.201960  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:46.205784  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:46.402338  407433 request.go:632] Waited for 195.651209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:23:46.402414  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:23:46.402420  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:46.402429  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:46.402433  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:46.405950  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:46.406728  407433 pod_ready.go:98] node "ha-628553-m04" hosting pod "kube-proxy-fkzqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-628553-m04" has status "Ready":"Unknown"
	I1007 12:23:46.406754  407433 pod_ready.go:82] duration metric: took 400.24566ms for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	E1007 12:23:46.406764  407433 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-628553-m04" hosting pod "kube-proxy-fkzqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-628553-m04" has status "Ready":"Unknown"
	I1007 12:23:46.406771  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:46.601841  407433 request.go:632] Waited for 194.991422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:23:46.601928  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:23:46.601934  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:46.601942  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:46.601950  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:46.606194  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:46.802178  407433 request.go:632] Waited for 195.348094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:46.802282  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:46.802291  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:46.802300  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:46.802307  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:46.806011  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:46.806717  407433 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:46.806740  407433 pod_ready.go:82] duration metric: took 399.962338ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:46.806751  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:47.001862  407433 request.go:632] Waited for 195.011199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:23:47.001951  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:23:47.001958  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:47.001970  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:47.001976  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:47.005786  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:47.202192  407433 request.go:632] Waited for 195.404826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:47.202272  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:47.202278  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:47.202289  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:47.202296  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:47.205737  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:47.206263  407433 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:47.206285  407433 pod_ready.go:82] duration metric: took 399.527218ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:47.206296  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:47.401271  407433 request.go:632] Waited for 194.871758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:23:47.401377  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:23:47.401387  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:47.401398  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:47.401407  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:47.405036  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:47.602200  407433 request.go:632] Waited for 196.363571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:47.602263  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:47.602270  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:47.602281  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:47.602286  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:47.606027  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:47.606573  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:47.606596  407433 pod_ready.go:82] duration metric: took 400.293688ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:47.606608  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:47.801693  407433 request.go:632] Waited for 194.969862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:23:47.801777  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:23:47.801786  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:47.801799  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:47.801809  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:47.805884  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:48.002025  407433 request.go:632] Waited for 195.383914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:48.002106  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:48.002112  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.002122  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.002129  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.006411  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:48.007140  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:48.007161  407433 pod_ready.go:82] duration metric: took 400.547189ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:48.007171  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:48.202325  407433 request.go:632] Waited for 195.078729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:23:48.202388  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:23:48.202393  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.202401  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.202413  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.207192  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:48.402151  407433 request.go:632] Waited for 193.426943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:48.402240  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:48.402248  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.402260  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.402270  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.406156  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:48.406819  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:48.406846  407433 pod_ready.go:82] duration metric: took 399.667367ms for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:48.406866  407433 pod_ready.go:39] duration metric: took 36.574596709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:23:48.406888  407433 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:23:48.406948  407433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:23:48.432065  407433 api_server.go:72] duration metric: took 37.306445342s to wait for apiserver process to appear ...
	I1007 12:23:48.432098  407433 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:23:48.432125  407433 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:23:48.439718  407433 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
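Note: the healthz probe above is a plain GET against the apiserver's /healthz endpoint, which returns the literal body "ok" when healthy. A minimal sketch using client-go's REST client (the kubeconfig path is an assumption):

    // Sketch: hit the apiserver's /healthz endpoint and expect the body "ok",
    // mirroring the check logged above. Kubeconfig path is an assumption.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        fmt.Printf("%s %v\n", body, err) // a healthy apiserver answers "ok"
    }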
	I1007 12:23:48.439838  407433 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:23:48.439852  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.439865  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.439875  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.440922  407433 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1007 12:23:48.441046  407433 api_server.go:141] control plane version: v1.31.1
	I1007 12:23:48.441083  407433 api_server.go:131] duration metric: took 8.977422ms to wait for apiserver health ...
	I1007 12:23:48.441105  407433 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:23:48.601351  407433 request.go:632] Waited for 160.153035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:23:48.601433  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:23:48.601449  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.601460  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.601466  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.608187  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:23:48.616433  407433 system_pods.go:59] 26 kube-system pods found
	I1007 12:23:48.616470  407433 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:23:48.616475  407433 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:23:48.616479  407433 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:23:48.616489  407433 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:23:48.616492  407433 system_pods.go:61] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:23:48.616527  407433 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:23:48.616535  407433 system_pods.go:61] "kindnet-rwk2r" [8ec7b1f3-d6b5-4e44-8574-c197eb45bf28] Running
	I1007 12:23:48.616543  407433 system_pods.go:61] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:23:48.616547  407433 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:23:48.616550  407433 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:23:48.616554  407433 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:23:48.616557  407433 system_pods.go:61] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:23:48.616561  407433 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:23:48.616566  407433 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:23:48.616570  407433 system_pods.go:61] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:23:48.616575  407433 system_pods.go:61] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:23:48.616578  407433 system_pods.go:61] "kube-proxy-fkzqr" [16f7acfc-13b5-426d-9b0a-59a5131fc297] Running
	I1007 12:23:48.616582  407433 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:23:48.616585  407433 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:23:48.616588  407433 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:23:48.616595  407433 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:23:48.616600  407433 system_pods.go:61] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:23:48.616603  407433 system_pods.go:61] "kube-vip-ha-628553" [56148ec7-dffa-4dfc-8414-c9feb65b09d3] Running
	I1007 12:23:48.616607  407433 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:23:48.616612  407433 system_pods.go:61] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:23:48.616616  407433 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:23:48.616621  407433 system_pods.go:74] duration metric: took 175.509164ms to wait for pod list to return data ...
	I1007 12:23:48.616631  407433 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:23:48.802229  407433 request.go:632] Waited for 185.508899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:23:48.802303  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:23:48.802312  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.802321  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.802329  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.806434  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:48.806628  407433 default_sa.go:45] found service account: "default"
	I1007 12:23:48.806657  407433 default_sa.go:55] duration metric: took 190.017985ms for default service account to be created ...
	I1007 12:23:48.806671  407433 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:23:49.002213  407433 request.go:632] Waited for 195.441972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:23:49.002280  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:23:49.002285  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:49.002293  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:49.002296  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:49.008706  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:23:49.017329  407433 system_pods.go:86] 26 kube-system pods found
	I1007 12:23:49.017363  407433 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:23:49.017374  407433 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:23:49.017378  407433 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:23:49.017382  407433 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:23:49.017385  407433 system_pods.go:89] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:23:49.017392  407433 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:23:49.017396  407433 system_pods.go:89] "kindnet-rwk2r" [8ec7b1f3-d6b5-4e44-8574-c197eb45bf28] Running
	I1007 12:23:49.017399  407433 system_pods.go:89] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:23:49.017403  407433 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:23:49.017406  407433 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:23:49.017410  407433 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:23:49.017413  407433 system_pods.go:89] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:23:49.017417  407433 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:23:49.017420  407433 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:23:49.017424  407433 system_pods.go:89] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:23:49.017429  407433 system_pods.go:89] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:23:49.017436  407433 system_pods.go:89] "kube-proxy-fkzqr" [16f7acfc-13b5-426d-9b0a-59a5131fc297] Running
	I1007 12:23:49.017439  407433 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:23:49.017442  407433 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:23:49.017446  407433 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:23:49.017449  407433 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:23:49.017452  407433 system_pods.go:89] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:23:49.017460  407433 system_pods.go:89] "kube-vip-ha-628553" [56148ec7-dffa-4dfc-8414-c9feb65b09d3] Running
	I1007 12:23:49.017466  407433 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:23:49.017469  407433 system_pods.go:89] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:23:49.017472  407433 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:23:49.017478  407433 system_pods.go:126] duration metric: took 210.798472ms to wait for k8s-apps to be running ...
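The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter (default QPS 5, burst 10), not from API Priority and Fairness on the server. Below is a minimal sketch of raising those limits when building a clientset; the kubeconfig path and the chosen QPS/Burst values are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from a kubeconfig path (the path here is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10; raising them reduces client-side waits.
	cfg.QPS = 50
	cfg.Burst = 100

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}
```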
	I1007 12:23:49.017486  407433 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:23:49.017535  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:23:49.032835  407433 system_svc.go:56] duration metric: took 15.336372ms for WaitForService to wait for kubelet
	I1007 12:23:49.032876  407433 kubeadm.go:582] duration metric: took 37.907263247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:23:49.032902  407433 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:23:49.201334  407433 request.go:632] Waited for 168.278903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:23:49.201430  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:23:49.201441  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:49.201453  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:49.201463  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:49.205415  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:49.206770  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:23:49.206795  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:23:49.206820  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:23:49.206824  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:23:49.206828  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:23:49.206831  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:23:49.206834  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:23:49.206837  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:23:49.206841  407433 node_conditions.go:105] duration metric: took 173.93387ms to run NodePressure ...
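The NodePressure verification above reads each node's capacity figures and pressure conditions. A minimal sketch of that check, assuming a client-go clientset has already been constructed (for example as in the previous sketch); this is not minikube's node_conditions.go code.

```go
package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// VerifyNodePressure lists every node, prints the capacity figures the log reports,
// and fails if any node signals memory or disk pressure.
func VerifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}
```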
	I1007 12:23:49.206856  407433 start.go:241] waiting for startup goroutines ...
	I1007 12:23:49.206880  407433 start.go:255] writing updated cluster config ...
	I1007 12:23:49.209205  407433 out.go:201] 
	I1007 12:23:49.210753  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:23:49.210885  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:23:49.212476  407433 out.go:177] * Starting "ha-628553-m04" worker node in "ha-628553" cluster
	I1007 12:23:49.213667  407433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:23:49.213695  407433 cache.go:56] Caching tarball of preloaded images
	I1007 12:23:49.213837  407433 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:23:49.213856  407433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:23:49.213989  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:23:49.214215  407433 start.go:360] acquireMachinesLock for ha-628553-m04: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:23:49.214284  407433 start.go:364] duration metric: took 33.583µs to acquireMachinesLock for "ha-628553-m04"
	I1007 12:23:49.214305  407433 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:23:49.214322  407433 fix.go:54] fixHost starting: m04
	I1007 12:23:49.214728  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:23:49.214773  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:23:49.230817  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I1007 12:23:49.231251  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:23:49.231746  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:23:49.231765  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:23:49.232170  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:23:49.232389  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:23:49.232578  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetState
	I1007 12:23:49.234354  407433 fix.go:112] recreateIfNeeded on ha-628553-m04: state=Stopped err=<nil>
	I1007 12:23:49.234381  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	W1007 12:23:49.234559  407433 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:23:49.236807  407433 out.go:177] * Restarting existing kvm2 VM for "ha-628553-m04" ...
	I1007 12:23:49.238021  407433 main.go:141] libmachine: (ha-628553-m04) Calling .Start
	I1007 12:23:49.238250  407433 main.go:141] libmachine: (ha-628553-m04) Ensuring networks are active...
	I1007 12:23:49.239018  407433 main.go:141] libmachine: (ha-628553-m04) Ensuring network default is active
	I1007 12:23:49.239377  407433 main.go:141] libmachine: (ha-628553-m04) Ensuring network mk-ha-628553 is active
	I1007 12:23:49.239771  407433 main.go:141] libmachine: (ha-628553-m04) Getting domain xml...
	I1007 12:23:49.240336  407433 main.go:141] libmachine: (ha-628553-m04) Creating domain...
	I1007 12:23:50.530662  407433 main.go:141] libmachine: (ha-628553-m04) Waiting to get IP...
	I1007 12:23:50.531807  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:50.532326  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:50.532394  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:50.532303  409719 retry.go:31] will retry after 303.800673ms: waiting for machine to come up
	I1007 12:23:50.838195  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:50.838893  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:50.838921  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:50.838836  409719 retry.go:31] will retry after 239.89794ms: waiting for machine to come up
	I1007 12:23:51.080318  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:51.080882  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:51.080918  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:51.080819  409719 retry.go:31] will retry after 362.373785ms: waiting for machine to come up
	I1007 12:23:51.445366  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:51.445901  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:51.445933  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:51.445831  409719 retry.go:31] will retry after 593.154236ms: waiting for machine to come up
	I1007 12:23:52.040581  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:52.040920  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:52.040951  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:52.040850  409719 retry.go:31] will retry after 511.859545ms: waiting for machine to come up
	I1007 12:23:52.554682  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:52.555211  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:52.555242  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:52.555144  409719 retry.go:31] will retry after 783.145525ms: waiting for machine to come up
	I1007 12:23:53.340031  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:53.340503  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:53.340534  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:53.340434  409719 retry.go:31] will retry after 890.686855ms: waiting for machine to come up
	I1007 12:23:54.233201  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:54.233851  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:54.233881  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:54.233799  409719 retry.go:31] will retry after 1.106716095s: waiting for machine to come up
	I1007 12:23:55.341582  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:55.342089  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:55.342118  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:55.342042  409719 retry.go:31] will retry after 1.803926987s: waiting for machine to come up
	I1007 12:23:57.148067  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:57.148434  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:57.148461  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:57.148414  409719 retry.go:31] will retry after 1.623538456s: waiting for machine to come up
	I1007 12:23:58.773300  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:58.773907  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:58.773939  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:58.773829  409719 retry.go:31] will retry after 2.479088328s: waiting for machine to come up
	I1007 12:24:01.254457  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:01.254920  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:24:01.254943  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:24:01.254879  409719 retry.go:31] will retry after 3.27298755s: waiting for machine to come up
	I1007 12:24:04.529276  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:04.529763  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:24:04.529785  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:24:04.529715  409719 retry.go:31] will retry after 4.066059297s: waiting for machine to come up
	I1007 12:24:08.600875  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.601416  407433 main.go:141] libmachine: (ha-628553-m04) Found IP for machine: 192.168.39.119
	I1007 12:24:08.601443  407433 main.go:141] libmachine: (ha-628553-m04) Reserving static IP address...
	I1007 12:24:08.601457  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has current primary IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.601784  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "ha-628553-m04", mac: "52:54:00:be:c5:aa", ip: "192.168.39.119"} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.601805  407433 main.go:141] libmachine: (ha-628553-m04) DBG | skip adding static IP to network mk-ha-628553 - found existing host DHCP lease matching {name: "ha-628553-m04", mac: "52:54:00:be:c5:aa", ip: "192.168.39.119"}
	I1007 12:24:08.601822  407433 main.go:141] libmachine: (ha-628553-m04) Reserved static IP address: 192.168.39.119
	I1007 12:24:08.601831  407433 main.go:141] libmachine: (ha-628553-m04) Waiting for SSH to be available...
	I1007 12:24:08.601839  407433 main.go:141] libmachine: (ha-628553-m04) DBG | Getting to WaitForSSH function...
	I1007 12:24:08.604097  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.604455  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.604490  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.604617  407433 main.go:141] libmachine: (ha-628553-m04) DBG | Using SSH client type: external
	I1007 12:24:08.604677  407433 main.go:141] libmachine: (ha-628553-m04) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa (-rw-------)
	I1007 12:24:08.604709  407433 main.go:141] libmachine: (ha-628553-m04) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:24:08.604742  407433 main.go:141] libmachine: (ha-628553-m04) DBG | About to run SSH command:
	I1007 12:24:08.604755  407433 main.go:141] libmachine: (ha-628553-m04) DBG | exit 0
	I1007 12:24:08.735165  407433 main.go:141] libmachine: (ha-628553-m04) DBG | SSH cmd err, output: <nil>: 
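The "will retry after ...: waiting for machine to come up" sequence above is a retry loop with growing, jittered delays that ends once the VM has an IP and SSH answers. A generic sketch of that pattern follows; the condition, starting delay, and cap are chosen arbitrarily for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with a growing, jittered delay until it succeeds or the
// deadline passes, mirroring the "will retry after ..." lines in the log above.
func waitFor(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		// Add jitter and grow the delay, capped at a few seconds.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() error {
		// Placeholder condition: pretend the "machine" comes up after ~3s.
		if time.Since(start) > 3*time.Second {
			return nil
		}
		return errors.New("waiting for machine to come up")
	}, 30*time.Second)
	fmt.Println("result:", err)
}
```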
	I1007 12:24:08.735522  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetConfigRaw
	I1007 12:24:08.736240  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetIP
	I1007 12:24:08.738754  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.739240  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.739275  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.739554  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:24:08.739795  407433 machine.go:93] provisionDockerMachine start ...
	I1007 12:24:08.739817  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:08.740027  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:08.742193  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.742545  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.742591  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.742720  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:08.742919  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.743124  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.743284  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:08.743457  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:24:08.743708  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 12:24:08.743724  407433 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:24:08.859645  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:24:08.859677  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetMachineName
	I1007 12:24:08.859942  407433 buildroot.go:166] provisioning hostname "ha-628553-m04"
	I1007 12:24:08.859983  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetMachineName
	I1007 12:24:08.860195  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:08.862887  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.863255  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.863299  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.863433  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:08.863605  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.863763  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.863862  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:08.864017  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:24:08.864194  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 12:24:08.864210  407433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m04 && echo "ha-628553-m04" | sudo tee /etc/hostname
	I1007 12:24:08.995163  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m04
	
	I1007 12:24:08.995198  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:08.998357  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.998766  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.998795  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.999025  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:08.999243  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.999431  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.999596  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:08.999802  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:24:09.000029  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 12:24:09.000051  407433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:24:09.124992  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:24:09.125028  407433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:24:09.125052  407433 buildroot.go:174] setting up certificates
	I1007 12:24:09.125065  407433 provision.go:84] configureAuth start
	I1007 12:24:09.125074  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetMachineName
	I1007 12:24:09.125469  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetIP
	I1007 12:24:09.128005  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.128375  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.128408  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.128554  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.130978  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.131391  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.131423  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.131729  407433 provision.go:143] copyHostCerts
	I1007 12:24:09.131771  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:24:09.131814  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:24:09.131827  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:24:09.131912  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:24:09.132028  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:24:09.132059  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:24:09.132066  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:24:09.132109  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:24:09.132181  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:24:09.132210  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:24:09.132215  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:24:09.132249  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:24:09.132336  407433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m04 san=[127.0.0.1 192.168.39.119 ha-628553-m04 localhost minikube]
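The server certificate generated above carries the SANs listed in the log (loopback, the node IP, and the hostnames). Below is a self-signed sketch of that step using crypto/x509; the real provisioning signs against the minikube CA key rather than self-signing, and the key size and validity period here are assumptions.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Generates a self-signed server certificate whose SANs mirror the ones in the log.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-628553-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.119")},
		DNSNames:     []string{"ha-628553-m04", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```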
	I1007 12:24:09.195630  407433 provision.go:177] copyRemoteCerts
	I1007 12:24:09.195723  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:24:09.195760  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.199172  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.199536  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.199565  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.199754  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:09.199952  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.200120  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:09.200284  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:09.293649  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:24:09.293723  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:24:09.323884  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:24:09.323974  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:24:09.352261  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:24:09.352355  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:24:09.379011  407433 provision.go:87] duration metric: took 253.929279ms to configureAuth
	I1007 12:24:09.379083  407433 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:24:09.379380  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:24:09.379482  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.382453  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.382893  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.382923  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.383117  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:09.383360  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.383596  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.383820  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:09.383993  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:24:09.384244  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 12:24:09.384260  407433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:24:09.632687  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:24:09.632714  407433 machine.go:96] duration metric: took 892.90566ms to provisionDockerMachine
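Each "Run:" line in this provisioning phase is a single command executed over SSH with the key shown earlier. A minimal sketch of that round trip using golang.org/x/crypto/ssh follows; the example command is arbitrary, and the host-key handling mirrors the StrictHostKeyChecking=no option seen in the log rather than being a recommendation.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote opens one SSH session and runs a single command, roughly what the
// provisioner's ssh_runner does for each "Run:" line above.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Host, user, and key path are taken from the log; the command is illustrative.
	out, err := runRemote("192.168.39.119:22", "docker",
		"/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa",
		"cat /etc/os-release")
	fmt.Println(out, err)
}
```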
	I1007 12:24:09.632727  407433 start.go:293] postStartSetup for "ha-628553-m04" (driver="kvm2")
	I1007 12:24:09.632738  407433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:24:09.632759  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:09.633108  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:24:09.633151  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.636346  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.636754  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.636792  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.637016  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:09.637214  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.637375  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:09.637486  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:09.727849  407433 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:24:09.732599  407433 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:24:09.732635  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:24:09.732727  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:24:09.732823  407433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:24:09.732837  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:24:09.732954  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:24:09.743228  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:24:09.769603  407433 start.go:296] duration metric: took 136.841708ms for postStartSetup
	I1007 12:24:09.769664  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:09.770065  407433 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 12:24:09.770109  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.772848  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.773402  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.773447  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.773610  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:09.773816  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.774011  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:09.774210  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:09.866866  407433 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1007 12:24:09.866952  407433 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1007 12:24:09.926582  407433 fix.go:56] duration metric: took 20.712259155s for fixHost
	I1007 12:24:09.926637  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.929943  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.930427  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.930457  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.930779  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:09.931041  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.931239  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.931404  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:09.931583  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:24:09.931821  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 12:24:09.931839  407433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:24:10.052238  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728303850.026883254
	
	I1007 12:24:10.052264  407433 fix.go:216] guest clock: 1728303850.026883254
	I1007 12:24:10.052271  407433 fix.go:229] Guest: 2024-10-07 12:24:10.026883254 +0000 UTC Remote: 2024-10-07 12:24:09.926613197 +0000 UTC m=+463.844049172 (delta=100.270057ms)
	I1007 12:24:10.052289  407433 fix.go:200] guest clock delta is within tolerance: 100.270057ms
	I1007 12:24:10.052294  407433 start.go:83] releasing machines lock for "ha-628553-m04", held for 20.837998474s
	I1007 12:24:10.052314  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:10.052639  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetIP
	I1007 12:24:10.055673  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.056063  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:10.056109  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.058302  407433 out.go:177] * Found network options:
	I1007 12:24:10.060025  407433 out.go:177]   - NO_PROXY=192.168.39.110,192.168.39.169,192.168.39.149
	W1007 12:24:10.061387  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:24:10.061420  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:24:10.061432  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:24:10.061458  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:10.062052  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:10.062220  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:10.062317  407433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:24:10.062357  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	W1007 12:24:10.062471  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:24:10.062498  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:24:10.062511  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:24:10.062599  407433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:24:10.062623  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:10.065003  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.065178  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.065378  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:10.065403  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.065574  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:10.065589  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:10.065629  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.065766  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:10.065776  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:10.065941  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:10.065946  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:10.066052  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:10.066122  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:10.066197  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:10.295113  407433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:24:10.303392  407433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:24:10.303485  407433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:24:10.322649  407433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:24:10.322683  407433 start.go:495] detecting cgroup driver to use...
	I1007 12:24:10.322757  407433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:24:10.344603  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:24:10.361918  407433 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:24:10.361994  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:24:10.378103  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:24:10.395313  407433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:24:10.539031  407433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:24:10.698607  407433 docker.go:233] disabling docker service ...
	I1007 12:24:10.698680  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:24:10.714061  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:24:10.732030  407433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:24:10.889095  407433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:24:11.018542  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:24:11.033237  407433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:24:11.055141  407433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:24:11.055262  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.067312  407433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:24:11.067393  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.079866  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.092168  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.104042  407433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:24:11.117127  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.130033  407433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.149837  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.161801  407433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:24:11.171884  407433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:24:11.171961  407433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:24:11.186081  407433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:24:11.198005  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:24:11.320021  407433 ssh_runner.go:195] Run: sudo systemctl restart crio
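For reference, the sed edits above would leave the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf looking roughly like this. This is a sketch reconstructed from the commands in the log, with the section headers assumed from CRI-O's stock config layout; the file itself is never printed by the test:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The failed bridge-nf-call-iptables probe a few lines above is tolerated ("which might be okay") because br_netfilter is loaded explicitly right after it, and the daemon-reload plus crio restart here is what picks up the edited drop-in.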
	I1007 12:24:11.419036  407433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:24:11.419128  407433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:24:11.424768  407433 start.go:563] Will wait 60s for crictl version
	I1007 12:24:11.424850  407433 ssh_runner.go:195] Run: which crictl
	I1007 12:24:11.429617  407433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:24:11.477303  407433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:24:11.477390  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:24:11.509335  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:24:11.543903  407433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:24:11.545292  407433 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:24:11.546729  407433 out.go:177]   - env NO_PROXY=192.168.39.110,192.168.39.169
	I1007 12:24:11.548180  407433 out.go:177]   - env NO_PROXY=192.168.39.110,192.168.39.169,192.168.39.149
	I1007 12:24:11.549562  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetIP
	I1007 12:24:11.552864  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:11.553327  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:11.553360  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:11.553659  407433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:24:11.558394  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:24:11.573119  407433 mustload.go:65] Loading cluster: ha-628553
	I1007 12:24:11.573407  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:24:11.573795  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:24:11.573848  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:24:11.590317  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
	I1007 12:24:11.590869  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:24:11.591440  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:24:11.591464  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:24:11.591796  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:24:11.591994  407433 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:24:11.593783  407433 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:24:11.594165  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:24:11.594216  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:24:11.610436  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I1007 12:24:11.610984  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:24:11.611543  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:24:11.611566  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:24:11.612084  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:24:11.612283  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:24:11.612454  407433 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.119
	I1007 12:24:11.612468  407433 certs.go:194] generating shared ca certs ...
	I1007 12:24:11.612487  407433 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:24:11.612655  407433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:24:11.612707  407433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:24:11.612726  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:24:11.612746  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:24:11.612762  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:24:11.612778  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:24:11.612849  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:24:11.612891  407433 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:24:11.612907  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:24:11.612938  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:24:11.612970  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:24:11.613001  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:24:11.613050  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:24:11.613088  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:24:11.613107  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:24:11.613124  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:24:11.613152  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:24:11.644899  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:24:11.672793  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:24:11.699075  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:24:11.728119  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:24:11.755027  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:24:11.781899  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:24:11.809176  407433 ssh_runner.go:195] Run: openssl version
	I1007 12:24:11.815973  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:24:11.828522  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:24:11.833206  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:24:11.833281  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:24:11.839689  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:24:11.850931  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:24:11.862646  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:24:11.867557  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:24:11.867622  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:24:11.873559  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:24:11.886128  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:24:11.898496  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:24:11.903740  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:24:11.903830  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:24:11.910375  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
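The test/ln pairs above follow the standard OpenSSL c_rehash pattern: each certificate placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is exactly what the openssl x509 -hash calls compute. A minimal sketch of that pattern for one of the certs (illustrative only, not a command the test ran verbatim):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0 in the log above

OpenSSL's hashed-directory lookup resolves CAs by that <hash>.0 name, so these symlinks are what make minikubeCA and the test certificates trusted on the node.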
	I1007 12:24:11.923085  407433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:24:11.927900  407433 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:24:11.927956  407433 kubeadm.go:934] updating node {m04 192.168.39.119 0 v1.31.1  false true} ...
	I1007 12:24:11.928056  407433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:24:11.928132  407433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:24:11.939738  407433 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:24:11.939830  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1007 12:24:11.951094  407433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:24:11.970139  407433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:24:11.989618  407433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:24:11.994178  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:24:12.008011  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:24:12.131341  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:24:12.151246  407433 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1007 12:24:12.151624  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:24:12.154458  407433 out.go:177] * Verifying Kubernetes components...
	I1007 12:24:12.156015  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:24:12.347894  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:24:12.373838  407433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:24:12.374206  407433 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:24:12.374306  407433 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:24:12.374617  407433 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m04" to be "Ready" ...
	I1007 12:24:12.374742  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:12.374755  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.374772  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.374783  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.378952  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:12.874926  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:12.874951  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.874978  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.874984  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.878534  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:12.879022  407433 node_ready.go:49] node "ha-628553-m04" has status "Ready":"True"
	I1007 12:24:12.879045  407433 node_ready.go:38] duration metric: took 504.401986ms for node "ha-628553-m04" to be "Ready" ...
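The node poll above (repeated GETs of /api/v1/nodes/ha-628553-m04 until the Ready condition reports True) can be reproduced by hand against the same cluster; a rough kubectl equivalent, assuming the ha-628553 context that minikube writes to the kubeconfig (illustrative, not a command the test runs):

    kubectl --context ha-628553 get node ha-628553-m04 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'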
	I1007 12:24:12.879059  407433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:24:12.879143  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:24:12.879154  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.879166  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.879174  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.884847  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:24:12.893298  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.893432  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:24:12.893447  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.893458  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.893465  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.897638  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:12.898370  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:12.898388  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.898396  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.898400  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.901304  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.901875  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:12.901901  407433 pod_ready.go:82] duration metric: took 8.568632ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.901917  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.902001  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:24:12.902009  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.902017  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.902024  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.905015  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.905856  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:12.905879  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.905887  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.905890  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.908998  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:12.909611  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:12.909632  407433 pod_ready.go:82] duration metric: took 7.704219ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.909643  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.909711  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:24:12.909719  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.909727  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.909733  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.912570  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.913034  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:12.913047  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.913055  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.913060  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.915920  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.916595  407433 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:12.916619  407433 pod_ready.go:82] duration metric: took 6.969737ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.916631  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.916698  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:24:12.916708  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.916716  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.916720  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.919049  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.919698  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:12.919716  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.919727  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.919732  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.922473  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.922974  407433 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:12.922997  407433 pod_ready.go:82] duration metric: took 6.358628ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.923011  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:13.075490  407433 request.go:632] Waited for 152.391076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:24:13.075561  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:24:13.075567  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:13.075575  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:13.075580  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:13.079745  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
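The "Waited for ...ms due to client-side throttling" messages that start here come from client-go's own rate limiter, not from the API server: the rest.Config dumped above leaves QPS and Burst at 0, so the client falls back to its default limiter (roughly 5 requests per second, which matches the ~200ms spacing of these waits) once the readiness loop issues back-to-back GETs. They are expected noise in this phase, not a sign of cluster trouble.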
	I1007 12:24:13.275957  407433 request.go:632] Waited for 195.439243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:13.276022  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:13.276029  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:13.276038  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:13.276044  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:13.280027  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:13.280708  407433 pod_ready.go:93] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:13.280727  407433 pod_ready.go:82] duration metric: took 357.709145ms for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:13.280747  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:13.475839  407433 request.go:632] Waited for 195.001393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:24:13.475898  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:24:13.475904  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:13.475912  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:13.475922  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:13.479095  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:13.675375  407433 request.go:632] Waited for 195.417553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:13.675447  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:13.675453  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:13.675462  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:13.675469  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:13.679265  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:13.679878  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:13.679901  407433 pod_ready.go:82] duration metric: took 399.147153ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:13.679911  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:13.875768  407433 request.go:632] Waited for 195.749757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:24:13.875851  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:24:13.875863  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:13.875878  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:13.875887  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:13.879948  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:14.075272  407433 request.go:632] Waited for 194.404378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:14.075382  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:14.075394  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:14.075409  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:14.075420  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:14.079859  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:14.080336  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:14.080360  407433 pod_ready.go:82] duration metric: took 400.441209ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:14.080373  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:14.275394  407433 request.go:632] Waited for 194.922319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:24:14.275475  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:24:14.275484  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:14.275496  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:14.275508  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:14.279992  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:14.475315  407433 request.go:632] Waited for 194.387646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:14.475396  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:14.475405  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:14.475441  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:14.475451  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:14.483945  407433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:24:14.484501  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:14.484526  407433 pod_ready.go:82] duration metric: took 404.144521ms for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:14.484544  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:14.675489  407433 request.go:632] Waited for 190.836423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:24:14.675562  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:24:14.675569  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:14.675577  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:14.675583  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:14.679839  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:14.875299  407433 request.go:632] Waited for 194.392469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:14.875398  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:14.875410  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:14.875424  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:14.875432  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:14.879672  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:14.880203  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:14.880226  407433 pod_ready.go:82] duration metric: took 395.672855ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:14.880241  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:15.075526  407433 request.go:632] Waited for 195.189096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:24:15.075642  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:24:15.075657  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:15.075670  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:15.075680  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:15.079700  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:15.275039  407433 request.go:632] Waited for 194.41743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:15.275106  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:15.275111  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:15.275120  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:15.275124  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:15.279593  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:15.280146  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:15.280172  407433 pod_ready.go:82] duration metric: took 399.921739ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:15.280187  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:15.475240  407433 request.go:632] Waited for 194.951573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:24:15.475322  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:24:15.475331  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:15.475344  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:15.475352  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:15.479019  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:15.675276  407433 request.go:632] Waited for 195.277446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:15.675361  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:15.675369  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:15.675384  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:15.675394  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:15.678988  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:15.679750  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:15.679773  407433 pod_ready.go:82] duration metric: took 399.578882ms for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:15.679786  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:15.875879  407433 request.go:632] Waited for 196.016204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:24:15.875965  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:24:15.875971  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:15.875977  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:15.875984  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:15.879439  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:16.075738  407433 request.go:632] Waited for 195.358684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:16.075835  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:16.075843  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:16.075854  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:16.075915  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:16.080069  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:16.080869  407433 pod_ready.go:93] pod "kube-proxy-956k4" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:16.080901  407433 pod_ready.go:82] duration metric: took 401.107884ms for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:16.080917  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:16.275390  407433 request.go:632] Waited for 194.353063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:24:16.275486  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:24:16.275495  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:16.275506  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:16.275514  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:16.280026  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:16.475130  407433 request.go:632] Waited for 194.263013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:16.475202  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:16.475208  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:16.475220  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:16.475230  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:16.479156  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:16.674953  407433 request.go:632] Waited for 93.299818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:24:16.675053  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:24:16.675059  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:16.675067  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:16.675073  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:16.679150  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:16.875324  407433 request.go:632] Waited for 195.43361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:16.875417  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:16.875422  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:16.875431  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:16.875439  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:16.878887  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:17.081707  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:24:17.081732  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:17.081740  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:17.081744  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:17.085874  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:17.275068  407433 request.go:632] Waited for 188.303218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:17.275143  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:17.275149  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:17.275159  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:17.275169  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:17.279302  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:17.280427  407433 pod_ready.go:93] pod "kube-proxy-fkzqr" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:17.280456  407433 pod_ready.go:82] duration metric: took 1.199530131s for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:17.280471  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:17.475955  407433 request.go:632] Waited for 195.373968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:24:17.476036  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:24:17.476042  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:17.476050  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:17.476054  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:17.480604  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:17.675952  407433 request.go:632] Waited for 194.407397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:17.676034  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:17.676046  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:17.676055  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:17.676066  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:17.679557  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:17.680215  407433 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:17.680249  407433 pod_ready.go:82] duration metric: took 399.768958ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:17.680264  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:17.875340  407433 request.go:632] Waited for 194.957231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:24:17.875449  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:24:17.875462  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:17.875474  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:17.875484  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:17.880414  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:18.075604  407433 request.go:632] Waited for 194.415341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:18.075685  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:18.075695  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:18.075706  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:18.075745  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:18.080238  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:18.081052  407433 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:18.081084  407433 pod_ready.go:82] duration metric: took 400.80865ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:18.081120  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:18.275983  407433 request.go:632] Waited for 194.754645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:24:18.276047  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:24:18.276052  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:18.276060  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:18.276075  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:18.280458  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:18.475558  407433 request.go:632] Waited for 194.403216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:18.475636  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:18.475644  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:18.475655  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:18.475662  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:18.479831  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:18.480490  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:18.480514  407433 pod_ready.go:82] duration metric: took 399.379545ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:18.480527  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:18.675666  407433 request.go:632] Waited for 195.040798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:24:18.675726  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:24:18.675732  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:18.675740  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:18.675745  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:18.765318  407433 round_trippers.go:574] Response Status: 200 OK in 89 milliseconds
	I1007 12:24:18.875428  407433 request.go:632] Waited for 109.26966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:18.875492  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:18.875499  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:18.875511  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:18.875518  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:18.879039  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:18.880174  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:18.880197  407433 pod_ready.go:82] duration metric: took 399.66167ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:18.880208  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:19.075186  407433 request.go:632] Waited for 194.895451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:24:19.075252  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:24:19.075258  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:19.075265  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:19.075269  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:19.078730  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:19.275491  407433 request.go:632] Waited for 195.988081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:19.275568  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:19.275575  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:19.275588  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:19.275600  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:19.279147  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:19.279845  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:19.279865  407433 pod_ready.go:82] duration metric: took 399.650057ms for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:19.279877  407433 pod_ready.go:39] duration metric: took 6.400801701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:24:19.279894  407433 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:24:19.279956  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:24:19.295966  407433 system_svc.go:56] duration metric: took 16.062104ms WaitForService to wait for kubelet
	I1007 12:24:19.295995  407433 kubeadm.go:582] duration metric: took 7.144696991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:24:19.296018  407433 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:24:19.475506  407433 request.go:632] Waited for 179.37132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:24:19.475571  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:24:19.475579  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:19.475594  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:19.475604  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:19.479543  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:19.481199  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:24:19.481225  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:24:19.481236  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:24:19.481239  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:24:19.481243  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:24:19.481246  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:24:19.481250  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:24:19.481253  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:24:19.481258  407433 node_conditions.go:105] duration metric: took 185.233923ms to run NodePressure ...
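The capacity figures above (four nodes, each reporting 2 CPUs and 17734596Ki of ephemeral storage) can be pulled the same way the NodePressure check does; a rough kubectl equivalent, again assuming the ha-628553 context (illustrative only):

    kubectl --context ha-628553 get nodes \
      -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage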
	I1007 12:24:19.481270  407433 start.go:241] waiting for startup goroutines ...
	I1007 12:24:19.481290  407433 start.go:255] writing updated cluster config ...
	I1007 12:24:19.481593  407433 ssh_runner.go:195] Run: rm -f paused
	I1007 12:24:19.533793  407433 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:24:19.536976  407433 out.go:177] * Done! kubectl is now configured to use "ha-628553" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.530489151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303860530464378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3298c0ab-1f2b-4d31-89c1-8b301c584abc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.530999134Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=06693e2a-7945-47f7-9340-3426a6561244 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.531053168Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=06693e2a-7945-47f7-9340-3426a6561244 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.531327290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5e97a40eecef136562a0989ce713ee25ed922c902d9067caddbfeee92b95fd8,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728303791742109052,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b121c4ce3f1c89617c1956de3bda850aa7df80ece2e55818c025ed03056dd739,PodSandboxId:8f3f66727ce1365593b28e63e649be987881153fc1dee7f4411e615f3062bd88,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303762625936143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b22cd52cf94f58f06a2f709e80ba61098a2e7fdfb76f690099d678912f9b19,PodSandboxId:aa6d4b081f68e17f4b8e697261e0c1b77db8c50c86e50eb3920acfea96816c1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1728303761407740922,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa8694f118c36b612561775e109d541e2a915f312d37d6e3be467a057106e52,PodSandboxId:ae1ce8ae39c0a3f0ebc9445e9a6282a68d71e6ea1d5b420140ef0d735cb39c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761279049523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-5407-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\"
:53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ec3666316198103acd7321ee499a1c56d87b82aa2872b2c455d2d56d79c00,PodSandboxId:80740c542dfa153e08d5e004001447f8625d19fa1c0d59dd5698a49011d35871,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761188755535,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bda594b13b22c3f3f156b468934651bfa4e3d35962aa8b65ccb85b2db3385e7,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728303761015467151,Labels:map[stri
ng]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b7814517854b690eef4baa06ba056aa5af0f6fad15d83fb160d2962677836,PodSandboxId:e9a17fae1d59f0a1f4cd6cedb57e8596bccee204e61dd7798da94c478152a769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728303761014117856,Labels:map[string]string{io.kubernet
es.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f3bec737bfa85e116bde94fa78421a9c6ed6155b02d6687349eded1294e2c,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728303746495412294,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e8270b7e13c41f1067c7a2b2c48735878a1ac270029bf9bc40d0cf539e6ab4,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728303738460827275,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484445a153ab89a201903c72ad1e56ff571951cfce2a89c1851318ea9522b4b1,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728303718484189839,Labels:map[string]string{io.kubernetes.container.name
: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a6976c1286a07048e803e2a844dc480948730521d22549e3eb0f742fbccc91,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728303716062185224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager
,io.kubernetes.pod.name: kube-controller-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcf62f683c219b741bda686149685e6169c8c1cbaa701e6ed54e473f53abac,PodSandboxId:5994bdf27bafee82146b6f8274d09d5805aaae55232bc8d15e6119990968d7c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728303715994464544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha
-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed82ce8b3bcd3f7d7e0d03d070c770ca1bd35d0d60c615b2ae6d0cf80b7d2c16,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728303715975854020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fb6227eb362f9d9b97a269451c541ab7c49c72f67128bee5659d44d441d54d,PodSandboxId:7ed281d4e14b42ab7ffe767ed7ee9b3ac644cab85e2ed4ecd79c789353c64949,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728303715902605465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-628553,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=06693e2a-7945-47f7-9340-3426a6561244 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.585252181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f211dc1f-096e-43ed-ac7b-c7c3de9d7e88 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.585346288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f211dc1f-096e-43ed-ac7b-c7c3de9d7e88 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.586846406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31ba369b-bc87-46b5-8eb5-63888afde7c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.587260420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303860587236714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31ba369b-bc87-46b5-8eb5-63888afde7c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.587965257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfc68092-690f-4b78-980e-52fb09e04292 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.588039910Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfc68092-690f-4b78-980e-52fb09e04292 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.588381576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5e97a40eecef136562a0989ce713ee25ed922c902d9067caddbfeee92b95fd8,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728303791742109052,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b121c4ce3f1c89617c1956de3bda850aa7df80ece2e55818c025ed03056dd739,PodSandboxId:8f3f66727ce1365593b28e63e649be987881153fc1dee7f4411e615f3062bd88,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303762625936143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b22cd52cf94f58f06a2f709e80ba61098a2e7fdfb76f690099d678912f9b19,PodSandboxId:aa6d4b081f68e17f4b8e697261e0c1b77db8c50c86e50eb3920acfea96816c1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1728303761407740922,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa8694f118c36b612561775e109d541e2a915f312d37d6e3be467a057106e52,PodSandboxId:ae1ce8ae39c0a3f0ebc9445e9a6282a68d71e6ea1d5b420140ef0d735cb39c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761279049523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-5407-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\"
:53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ec3666316198103acd7321ee499a1c56d87b82aa2872b2c455d2d56d79c00,PodSandboxId:80740c542dfa153e08d5e004001447f8625d19fa1c0d59dd5698a49011d35871,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761188755535,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bda594b13b22c3f3f156b468934651bfa4e3d35962aa8b65ccb85b2db3385e7,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728303761015467151,Labels:map[stri
ng]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b7814517854b690eef4baa06ba056aa5af0f6fad15d83fb160d2962677836,PodSandboxId:e9a17fae1d59f0a1f4cd6cedb57e8596bccee204e61dd7798da94c478152a769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728303761014117856,Labels:map[string]string{io.kubernet
es.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f3bec737bfa85e116bde94fa78421a9c6ed6155b02d6687349eded1294e2c,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728303746495412294,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e8270b7e13c41f1067c7a2b2c48735878a1ac270029bf9bc40d0cf539e6ab4,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728303738460827275,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484445a153ab89a201903c72ad1e56ff571951cfce2a89c1851318ea9522b4b1,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728303718484189839,Labels:map[string]string{io.kubernetes.container.name
: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a6976c1286a07048e803e2a844dc480948730521d22549e3eb0f742fbccc91,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728303716062185224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager
,io.kubernetes.pod.name: kube-controller-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcf62f683c219b741bda686149685e6169c8c1cbaa701e6ed54e473f53abac,PodSandboxId:5994bdf27bafee82146b6f8274d09d5805aaae55232bc8d15e6119990968d7c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728303715994464544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha
-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed82ce8b3bcd3f7d7e0d03d070c770ca1bd35d0d60c615b2ae6d0cf80b7d2c16,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728303715975854020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fb6227eb362f9d9b97a269451c541ab7c49c72f67128bee5659d44d441d54d,PodSandboxId:7ed281d4e14b42ab7ffe767ed7ee9b3ac644cab85e2ed4ecd79c789353c64949,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728303715902605465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-628553,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfc68092-690f-4b78-980e-52fb09e04292 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.640641724Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a999c7ac-a76d-4b45-8e69-6f7c92cf2867 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.640826674Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a999c7ac-a76d-4b45-8e69-6f7c92cf2867 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.641916374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67aaf309-d618-4bbb-b4ee-bdd1d5e20a7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.642412925Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303860642387700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67aaf309-d618-4bbb-b4ee-bdd1d5e20a7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.642899819Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=acaf8017-42ba-4356-a63c-7865de0c2bd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.642970556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=acaf8017-42ba-4356-a63c-7865de0c2bd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.643243766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5e97a40eecef136562a0989ce713ee25ed922c902d9067caddbfeee92b95fd8,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728303791742109052,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b121c4ce3f1c89617c1956de3bda850aa7df80ece2e55818c025ed03056dd739,PodSandboxId:8f3f66727ce1365593b28e63e649be987881153fc1dee7f4411e615f3062bd88,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303762625936143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b22cd52cf94f58f06a2f709e80ba61098a2e7fdfb76f690099d678912f9b19,PodSandboxId:aa6d4b081f68e17f4b8e697261e0c1b77db8c50c86e50eb3920acfea96816c1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1728303761407740922,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa8694f118c36b612561775e109d541e2a915f312d37d6e3be467a057106e52,PodSandboxId:ae1ce8ae39c0a3f0ebc9445e9a6282a68d71e6ea1d5b420140ef0d735cb39c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761279049523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-5407-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\"
:53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ec3666316198103acd7321ee499a1c56d87b82aa2872b2c455d2d56d79c00,PodSandboxId:80740c542dfa153e08d5e004001447f8625d19fa1c0d59dd5698a49011d35871,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761188755535,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bda594b13b22c3f3f156b468934651bfa4e3d35962aa8b65ccb85b2db3385e7,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728303761015467151,Labels:map[stri
ng]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b7814517854b690eef4baa06ba056aa5af0f6fad15d83fb160d2962677836,PodSandboxId:e9a17fae1d59f0a1f4cd6cedb57e8596bccee204e61dd7798da94c478152a769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728303761014117856,Labels:map[string]string{io.kubernet
es.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f3bec737bfa85e116bde94fa78421a9c6ed6155b02d6687349eded1294e2c,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728303746495412294,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e8270b7e13c41f1067c7a2b2c48735878a1ac270029bf9bc40d0cf539e6ab4,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728303738460827275,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484445a153ab89a201903c72ad1e56ff571951cfce2a89c1851318ea9522b4b1,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728303718484189839,Labels:map[string]string{io.kubernetes.container.name
: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a6976c1286a07048e803e2a844dc480948730521d22549e3eb0f742fbccc91,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728303716062185224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager
,io.kubernetes.pod.name: kube-controller-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcf62f683c219b741bda686149685e6169c8c1cbaa701e6ed54e473f53abac,PodSandboxId:5994bdf27bafee82146b6f8274d09d5805aaae55232bc8d15e6119990968d7c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728303715994464544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha
-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed82ce8b3bcd3f7d7e0d03d070c770ca1bd35d0d60c615b2ae6d0cf80b7d2c16,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728303715975854020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fb6227eb362f9d9b97a269451c541ab7c49c72f67128bee5659d44d441d54d,PodSandboxId:7ed281d4e14b42ab7ffe767ed7ee9b3ac644cab85e2ed4ecd79c789353c64949,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728303715902605465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-628553,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=acaf8017-42ba-4356-a63c-7865de0c2bd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.695028422Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae050f96-3969-4412-932d-688dceccc213 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.695100678Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae050f96-3969-4412-932d-688dceccc213 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.696344362Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bf9c605-4121-44c2-b7a0-5139a7bd05f8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.696934247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303860696908132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bf9c605-4121-44c2-b7a0-5139a7bd05f8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.697521768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67a4ce01-4f0c-4de1-8f44-ef7928e7d700 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.697597484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67a4ce01-4f0c-4de1-8f44-ef7928e7d700 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:24:20 ha-628553 crio[948]: time="2024-10-07 12:24:20.697951385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5e97a40eecef136562a0989ce713ee25ed922c902d9067caddbfeee92b95fd8,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728303791742109052,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b121c4ce3f1c89617c1956de3bda850aa7df80ece2e55818c025ed03056dd739,PodSandboxId:8f3f66727ce1365593b28e63e649be987881153fc1dee7f4411e615f3062bd88,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303762625936143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b22cd52cf94f58f06a2f709e80ba61098a2e7fdfb76f690099d678912f9b19,PodSandboxId:aa6d4b081f68e17f4b8e697261e0c1b77db8c50c86e50eb3920acfea96816c1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1728303761407740922,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:baa8694f118c36b612561775e109d541e2a915f312d37d6e3be467a057106e52,PodSandboxId:ae1ce8ae39c0a3f0ebc9445e9a6282a68d71e6ea1d5b420140ef0d735cb39c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761279049523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-5407-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\"
:53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ec3666316198103acd7321ee499a1c56d87b82aa2872b2c455d2d56d79c00,PodSandboxId:80740c542dfa153e08d5e004001447f8625d19fa1c0d59dd5698a49011d35871,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761188755535,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annota
tions:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bda594b13b22c3f3f156b468934651bfa4e3d35962aa8b65ccb85b2db3385e7,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728303761015467151,Labels:map[stri
ng]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b7814517854b690eef4baa06ba056aa5af0f6fad15d83fb160d2962677836,PodSandboxId:e9a17fae1d59f0a1f4cd6cedb57e8596bccee204e61dd7798da94c478152a769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728303761014117856,Labels:map[string]string{io.kubernet
es.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f3bec737bfa85e116bde94fa78421a9c6ed6155b02d6687349eded1294e2c,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728303746495412294,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9e8270b7e13c41f1067c7a2b2c48735878a1ac270029bf9bc40d0cf539e6ab4,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728303738460827275,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484445a153ab89a201903c72ad1e56ff571951cfce2a89c1851318ea9522b4b1,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728303718484189839,Labels:map[string]string{io.kubernetes.container.name
: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a6976c1286a07048e803e2a844dc480948730521d22549e3eb0f742fbccc91,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728303716062185224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager
,io.kubernetes.pod.name: kube-controller-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcf62f683c219b741bda686149685e6169c8c1cbaa701e6ed54e473f53abac,PodSandboxId:5994bdf27bafee82146b6f8274d09d5805aaae55232bc8d15e6119990968d7c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728303715994464544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha
-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed82ce8b3bcd3f7d7e0d03d070c770ca1bd35d0d60c615b2ae6d0cf80b7d2c16,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728303715975854020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fb6227eb362f9d9b97a269451c541ab7c49c72f67128bee5659d44d441d54d,PodSandboxId:7ed281d4e14b42ab7ffe767ed7ee9b3ac644cab85e2ed4ecd79c789353c64949,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728303715902605465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-628553,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67a4ce01-4f0c-4de1-8f44-ef7928e7d700 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c5e97a40eecef       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       3                   17641b07f74e7       storage-provisioner
	b121c4ce3f1c8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   About a minute ago   Running             busybox                   1                   8f3f66727ce13       busybox-7dff88458-vc5k8
	d3b22cd52cf94       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   aa6d4b081f68e       kindnet-snf5v
	baa8694f118c3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   ae1ce8ae39c0a       coredns-7c65d6cfc9-ktmzq
	686ec36663161       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   80740c542dfa1       coredns-7c65d6cfc9-rsr6v
	9bda594b13b22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       2                   17641b07f74e7       storage-provisioner
	e98b781451785       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   e9a17fae1d59f       kube-proxy-h6vg8
	225f3bec737bf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   30929f0f1c9a5       kube-controller-manager-ha-628553
	e9e8270b7e13c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Running             kube-apiserver            2                   ba58486de78c0       kube-apiserver-ha-628553
	484445a153ab8       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     2 minutes ago        Running             kube-vip                  0                   b6f9f9b6f13de       kube-vip-ha-628553
	f0a6976c1286a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   30929f0f1c9a5       kube-controller-manager-ha-628553
	f0bcf62f683c2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   5994bdf27bafe       etcd-ha-628553
	ed82ce8b3bcd3       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            1                   ba58486de78c0       kube-apiserver-ha-628553
	95fb6227eb362       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   7ed281d4e14b4       kube-scheduler-ha-628553
	
	
	==> coredns [686ec3666316198103acd7321ee499a1c56d87b82aa2872b2c455d2d56d79c00] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42442 - 18766 "HINFO IN 4571394630518627788.6445406735941127243. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029451369s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[280398106]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.760) (total time: 30005ms):
	Trace[280398106]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (12:23:11.765)
	Trace[280398106]: [30.00544298s] [30.00544298s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[608973468]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.764) (total time: 30001ms):
	Trace[608973468]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:23:11.766)
	Trace[608973468]: [30.001242765s] [30.001242765s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[658286387]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.757) (total time: 30008ms):
	Trace[658286387]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30008ms (12:23:11.766)
	Trace[658286387]: [30.008700424s] [30.008700424s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [baa8694f118c36b612561775e109d541e2a915f312d37d6e3be467a057106e52] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:35509 - 48584 "HINFO IN 8020388657913662547.1460102013465811788. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030040071s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1592772175]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.757) (total time: 30005ms):
	Trace[1592772175]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (12:23:11.762)
	Trace[1592772175]: [30.005216016s] [30.005216016s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[381370772]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.761) (total time: 30002ms):
	Trace[381370772]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (12:23:11.764)
	Trace[381370772]: [30.002479951s] [30.002479951s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[770192097]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.762) (total time: 30002ms):
	Trace[770192097]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (12:23:11.764)
	Trace[770192097]: [30.002450665s] [30.002450665s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-628553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_07_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:07:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:24:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:22:56 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:22:56 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:22:56 +0000   Mon, 07 Oct 2024 12:07:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:22:56 +0000   Mon, 07 Oct 2024 12:07:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    ha-628553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a13f7b7982a74b9eb8f82488f9c3d1a6
	  System UUID:                a13f7b79-82a7-4b9e-b8f8-2488f9c3d1a6
	  Boot ID:                    bd90a803-e822-4ebc-9e14-1cf5ab6bd21a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vc5k8              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-ktmzq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-rsr6v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-628553                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-snf5v                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-628553             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-628553    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-h6vg8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-628553             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-628553                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 99s                    kube-proxy       
	  Normal  Starting                 16m                    kube-proxy       
	  Normal  Starting                 16m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                    kubelet          Node ha-628553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     16m                    kubelet          Node ha-628553 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    16m                    kubelet          Node ha-628553 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           16m                    node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  NodeReady                16m                    kubelet          Node ha-628553 status is now: NodeReady
	  Normal  RegisteredNode           15m                    node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node ha-628553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node ha-628553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m32s (x7 over 2m32s)  kubelet          Node ha-628553 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           112s                   node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  RegisteredNode           107s                   node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	  Normal  RegisteredNode           50s                    node-controller  Node ha-628553 event: Registered Node ha-628553 in Controller
	
	
	Name:               ha-628553-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_08_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:08:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:24:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:22:48 +0000   Mon, 07 Oct 2024 12:22:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:22:48 +0000   Mon, 07 Oct 2024 12:22:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:22:48 +0000   Mon, 07 Oct 2024 12:22:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:22:48 +0000   Mon, 07 Oct 2024 12:22:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.169
	  Hostname:    ha-628553-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ba9ae7572f54f4ab8de307b6e86da52
	  System UUID:                4ba9ae75-72f5-4f4a-b8de-307b6e86da52
	  Boot ID:                    40562cac-0e07-49bd-a1a9-5935d586d525
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-75ng4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     busybox-7dff88458-jhmrp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-628553-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-9rq2w                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-628553-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-628553-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-s5c6d                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-628553-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-628553-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 15m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)    kubelet          Node ha-628553-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)    kubelet          Node ha-628553-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)    kubelet          Node ha-628553-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           15m                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           14m                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  NodeNotReady             12m                  node-controller  Node ha-628553-m02 status is now: NodeNotReady
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node ha-628553-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node ha-628553-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node ha-628553-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           112s                 node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           107s                 node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	  Normal  RegisteredNode           50s                  node-controller  Node ha-628553-m02 event: Registered Node ha-628553-m02 in Controller
	
	
	Name:               ha-628553-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_09_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:09:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:24:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:23:42 +0000   Mon, 07 Oct 2024 12:23:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:23:42 +0000   Mon, 07 Oct 2024 12:23:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:23:42 +0000   Mon, 07 Oct 2024 12:23:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:23:42 +0000   Mon, 07 Oct 2024 12:23:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.149
	  Hostname:    ha-628553-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aab92960db1b4070940c89c6ff930351
	  System UUID:                aab92960-db1b-4070-940c-89c6ff930351
	  Boot ID:                    7dc53db3-d20e-492d-ba1c-daa7c7a9df41
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-628553-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-sb4xd                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-628553-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-628553-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-956k4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-628553-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-628553-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14m                kube-proxy       
	  Normal   Starting                 54s                kube-proxy       
	  Normal   RegisteredNode           14m                node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node ha-628553-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           14m                node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal   RegisteredNode           112s               node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	  Normal   NodeNotReady             72s                node-controller  Node ha-628553-m03 status is now: NodeNotReady
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  70s (x2 over 70s)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x2 over 70s)  kubelet          Node ha-628553-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x2 over 70s)  kubelet          Node ha-628553-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 70s                kubelet          Node ha-628553-m03 has been rebooted, boot id: 7dc53db3-d20e-492d-ba1c-daa7c7a9df41
	  Normal   NodeReady                70s                kubelet          Node ha-628553-m03 status is now: NodeReady
	  Normal   RegisteredNode           50s                node-controller  Node ha-628553-m03 event: Registered Node ha-628553-m03 in Controller
	
	
	Name:               ha-628553-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-628553-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=ha-628553
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_07T12_10_42_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:10:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-628553-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:24:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:24:12 +0000   Mon, 07 Oct 2024 12:24:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:24:12 +0000   Mon, 07 Oct 2024 12:24:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:24:12 +0000   Mon, 07 Oct 2024 12:24:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:24:12 +0000   Mon, 07 Oct 2024 12:24:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.119
	  Hostname:    ha-628553-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b7e249f18a3f466abcbb6b94b02ed2ec
	  System UUID:                b7e249f1-8a3f-466a-bcbb-6b94b02ed2ec
	  Boot ID:                    2595e0c8-36a9-4e59-9b95-b15d454677d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rwk2r       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-fkzqr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-628553-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-628553-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-628553-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal   NodeReady                13m                kubelet          Node ha-628553-m04 status is now: NodeReady
	  Normal   RegisteredNode           112s               node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal   NodeNotReady             72s                node-controller  Node ha-628553-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           50s                node-controller  Node ha-628553-m04 event: Registered Node ha-628553-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s                 kubelet          Node ha-628553-m04 has been rebooted, boot id: 2595e0c8-36a9-4e59-9b95-b15d454677d7
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-628553-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-628553-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-628553-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                9s                 kubelet          Node ha-628553-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 7 12:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051426] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039314] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.872196] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.731407] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.642671] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.201157] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.059612] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060379] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.204754] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.116063] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +0.296542] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +4.144385] systemd-fstab-generator[1043]: Ignoring "noauto" option for root device
	[  +0.347991] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.519719] kauditd_printk_skb: 1 callbacks suppressed
	[Oct 7 12:22] kauditd_printk_skb: 40 callbacks suppressed
	[Oct 7 12:23] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [f0bcf62f683c219b741bda686149685e6169c8c1cbaa701e6ed54e473f53abac] <==
	{"level":"warn","ts":"2024-10-07T12:23:04.591158Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9d3f3bde44d498b8","error":"Get \"https://192.168.39.149:2380/version\": dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:06.621215Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9d3f3bde44d498b8","rtt":"0s","error":"dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:06.621349Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9d3f3bde44d498b8","rtt":"0s","error":"dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:08.593581Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.149:2380/version","remote-member-id":"9d3f3bde44d498b8","error":"Get \"https://192.168.39.149:2380/version\": dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:08.593763Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9d3f3bde44d498b8","error":"Get \"https://192.168.39.149:2380/version\": dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:11.621392Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9d3f3bde44d498b8","rtt":"0s","error":"dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:11.621598Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9d3f3bde44d498b8","rtt":"0s","error":"dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:12.596313Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.149:2380/version","remote-member-id":"9d3f3bde44d498b8","error":"Get \"https://192.168.39.149:2380/version\": dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:12.596460Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9d3f3bde44d498b8","error":"Get \"https://192.168.39.149:2380/version\": dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:16.599172Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.149:2380/version","remote-member-id":"9d3f3bde44d498b8","error":"Get \"https://192.168.39.149:2380/version\": dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:16.599326Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9d3f3bde44d498b8","error":"Get \"https://192.168.39.149:2380/version\": dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:16.622258Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9d3f3bde44d498b8","rtt":"0s","error":"dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:16.622390Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9d3f3bde44d498b8","rtt":"0s","error":"dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:20.602518Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.149:2380/version","remote-member-id":"9d3f3bde44d498b8","error":"Get \"https://192.168.39.149:2380/version\": dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:20.602564Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9d3f3bde44d498b8","error":"Get \"https://192.168.39.149:2380/version\": dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:21.622898Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"9d3f3bde44d498b8","rtt":"0s","error":"dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:21.623042Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"9d3f3bde44d498b8","rtt":"0s","error":"dial tcp 192.168.39.149:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-10-07T12:23:22.798881Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.512021ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11932448183492844855 > lease_revoke:<id:25989266edd89c15>","response":"size:29"}
	{"level":"info","ts":"2024-10-07T12:23:24.417052Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"9d3f3bde44d498b8"}
	{"level":"info","ts":"2024-10-07T12:23:24.417384Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fbb007bab925a598","remote-peer-id":"9d3f3bde44d498b8"}
	{"level":"info","ts":"2024-10-07T12:23:24.417507Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fbb007bab925a598","remote-peer-id":"9d3f3bde44d498b8"}
	{"level":"info","ts":"2024-10-07T12:23:24.426531Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fbb007bab925a598","to":"9d3f3bde44d498b8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-10-07T12:23:24.427739Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fbb007bab925a598","remote-peer-id":"9d3f3bde44d498b8"}
	{"level":"info","ts":"2024-10-07T12:23:24.444621Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fbb007bab925a598","to":"9d3f3bde44d498b8","stream-type":"stream Message"}
	{"level":"info","ts":"2024-10-07T12:23:24.445249Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fbb007bab925a598","remote-peer-id":"9d3f3bde44d498b8"}
	
	
	==> kernel <==
	 12:24:21 up 2 min,  0 users,  load average: 0.41, 0.30, 0.12
	Linux ha-628553 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d3b22cd52cf94f58f06a2f709e80ba61098a2e7fdfb76f690099d678912f9b19] <==
	I1007 12:23:42.852748       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:23:52.858351       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:23:52.858399       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:23:52.858546       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:23:52.858572       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:23:52.858615       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:23:52.858621       1 main.go:299] handling current node
	I1007 12:23:52.858636       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:23:52.858640       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:24:02.859360       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:24:02.859512       1 main.go:299] handling current node
	I1007 12:24:02.859543       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:24:02.859562       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:24:02.859810       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:24:02.859854       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:24:02.859927       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:24:02.859946       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:24:12.851190       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:24:12.851458       1 main.go:299] handling current node
	I1007 12:24:12.851549       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:24:12.851575       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:24:12.852053       1 main.go:295] Handling node with IPs: map[192.168.39.149:{}]
	I1007 12:24:12.852113       1 main.go:322] Node ha-628553-m03 has CIDR [10.244.2.0/24] 
	I1007 12:24:12.852181       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:24:12.852200       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [e9e8270b7e13c41f1067c7a2b2c48735878a1ac270029bf9bc40d0cf539e6ab4] <==
	I1007 12:22:25.840993       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1007 12:22:25.841143       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1007 12:22:25.936478       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1007 12:22:25.936527       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1007 12:22:25.942946       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1007 12:22:25.948903       1 shared_informer.go:320] Caches are synced for configmaps
	I1007 12:22:25.949000       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1007 12:22:25.949039       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 12:22:25.955843       1 aggregator.go:171] initial CRD sync complete...
	I1007 12:22:25.957709       1 autoregister_controller.go:144] Starting autoregister controller
	I1007 12:22:25.957766       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1007 12:22:25.957792       1 cache.go:39] Caches are synced for autoregister controller
	I1007 12:22:25.958256       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1007 12:22:25.969338       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1007 12:22:25.975341       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 12:22:25.975475       1 policy_source.go:224] refreshing policies
	W1007 12:22:25.999899       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.149]
	I1007 12:22:26.001205       1 controller.go:615] quota admission added evaluator for: endpoints
	I1007 12:22:26.011150       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1007 12:22:26.015113       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1007 12:22:26.029460       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1007 12:22:26.030162       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1007 12:22:26.076205       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 12:22:26.841508       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1007 12:22:27.233351       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.110 192.168.39.149]
	
	
	==> kube-apiserver [ed82ce8b3bcd3f7d7e0d03d070c770ca1bd35d0d60c615b2ae6d0cf80b7d2c16] <==
	I1007 12:21:56.404908       1 options.go:228] external host was not specified, using 192.168.39.110
	I1007 12:21:56.409148       1 server.go:142] Version: v1.31.1
	I1007 12:21:56.409336       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:21:58.254748       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1007 12:21:58.272453       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1007 12:21:58.279631       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1007 12:21:58.279725       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1007 12:21:58.279975       1 instance.go:232] Using reconciler: lease
	W1007 12:22:18.256346       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1007 12:22:18.256875       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1007 12:22:18.281743       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1007 12:22:18.281901       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [225f3bec737bfa85e116bde94fa78421a9c6ed6155b02d6687349eded1294e2c] <==
	I1007 12:22:43.616057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="74.328µs"
	I1007 12:22:48.034893       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m02"
	I1007 12:22:56.862607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553"
	I1007 12:23:09.512131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:23:09.515771       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m03"
	I1007 12:23:09.558465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m03"
	I1007 12:23:09.558717       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	E1007 12:23:09.633271       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"de99ce27-0861-4082-b41c-9f3fda6e81be\", ResourceVersion:\"2035\", Generation:1, CreationTimestamp:time.Date(2024, time.October, 7, 12, 7, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\"
,\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20240813-c6f155d6\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc002660960), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"
\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0027bea38), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeC
laimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0027bea50), EmptyDir:(*v1.EmptyDirVolumeSource)(
nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxV
olumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0027bea68), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Azu
reFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20240813-c6f155d6\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc0026609c0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSo
urce)(0xc002660a00)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false
, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0029d9a40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralConta
iner(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc0029c98f0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028fcb00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Ov
erhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002a930c0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0029c992c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1007 12:23:11.616232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m03"
	I1007 12:23:11.643246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m03"
	I1007 12:23:14.469100       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:23:20.638772       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-th42x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-th42x\": the object has been modified; please apply your changes to the latest version and try again"
	I1007 12:23:20.639879       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"1fc6540e-4c8f-4db1-9900-bc14b0d460f5", APIVersion:"v1", ResourceVersion:"239", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-th42x EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-th42x": the object has been modified; please apply your changes to the latest version and try again
	I1007 12:23:20.713817       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="107.314742ms"
	E1007 12:23:20.713869       1 replica_set.go:560] "Unhandled Error" err="sync \"kube-system/coredns-7c65d6cfc9\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-7c65d6cfc9\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1007 12:23:20.715830       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="211.348µs"
	I1007 12:23:20.720815       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.921µs"
	I1007 12:23:31.963090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:23:32.043269       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:23:42.185471       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m03"
	I1007 12:24:12.503504       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:24:12.503762       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-628553-m04"
	I1007 12:24:12.529393       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	I1007 12:24:14.416829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-628553-m04"
	
	
	==> kube-controller-manager [f0a6976c1286a07048e803e2a844dc480948730521d22549e3eb0f742fbccc91] <==
	I1007 12:21:57.348998       1 serving.go:386] Generated self-signed cert in-memory
	I1007 12:21:58.168523       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1007 12:21:58.171722       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:21:58.173765       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1007 12:21:58.175148       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1007 12:21:58.175214       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1007 12:21:58.175302       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1007 12:22:25.846141       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [e98b7814517854b690eef4baa06ba056aa5af0f6fad15d83fb160d2962677836] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1007 12:22:41.904334       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1007 12:22:41.931688       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.110"]
	E1007 12:22:41.931966       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:22:41.973281       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1007 12:22:41.973339       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1007 12:22:41.973364       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:22:41.978334       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:22:41.979183       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:22:41.979215       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:22:41.981080       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:22:41.981585       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:22:41.981719       1 config.go:199] "Starting service config controller"
	I1007 12:22:41.981769       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:22:41.982449       1 config.go:328] "Starting node config controller"
	I1007 12:22:41.982473       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:22:42.082265       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:22:42.082326       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:22:42.082510       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [95fb6227eb362f9d9b97a269451c541ab7c49c72f67128bee5659d44d441d54d] <==
	I1007 12:21:57.440573       1 serving.go:386] Generated self-signed cert in-memory
	W1007 12:22:08.296812       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.39.110:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W1007 12:22:08.296895       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 12:22:08.296905       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 12:22:25.878023       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1007 12:22:25.878089       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:22:25.887306       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1007 12:22:25.886960       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1007 12:22:25.890274       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 12:22:25.887530       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1007 12:22:26.092500       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 12:22:49 ha-628553 kubelet[1050]: E1007 12:22:49.322236    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303769321781663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:22:59 ha-628553 kubelet[1050]: E1007 12:22:59.324146    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303779323780397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:22:59 ha-628553 kubelet[1050]: E1007 12:22:59.324528    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303779323780397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:09 ha-628553 kubelet[1050]: E1007 12:23:09.326028    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303789325595005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:09 ha-628553 kubelet[1050]: E1007 12:23:09.326419    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303789325595005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:11 ha-628553 kubelet[1050]: I1007 12:23:11.710810    1050 scope.go:117] "RemoveContainer" containerID="9bda594b13b22c3f3f156b468934651bfa4e3d35962aa8b65ccb85b2db3385e7"
	Oct 07 12:23:19 ha-628553 kubelet[1050]: E1007 12:23:19.331252    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303799330880275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:19 ha-628553 kubelet[1050]: E1007 12:23:19.331828    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303799330880275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:29 ha-628553 kubelet[1050]: E1007 12:23:29.336515    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303809335875447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:29 ha-628553 kubelet[1050]: E1007 12:23:29.336565    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303809335875447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:39 ha-628553 kubelet[1050]: E1007 12:23:39.338122    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303819337590682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:39 ha-628553 kubelet[1050]: E1007 12:23:39.338161    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303819337590682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:49 ha-628553 kubelet[1050]: E1007 12:23:49.325029    1050 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 07 12:23:49 ha-628553 kubelet[1050]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 07 12:23:49 ha-628553 kubelet[1050]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 07 12:23:49 ha-628553 kubelet[1050]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 07 12:23:49 ha-628553 kubelet[1050]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 07 12:23:49 ha-628553 kubelet[1050]: E1007 12:23:49.340151    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303829339392596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:49 ha-628553 kubelet[1050]: E1007 12:23:49.340179    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303829339392596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:59 ha-628553 kubelet[1050]: E1007 12:23:59.342686    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303839342304637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:23:59 ha-628553 kubelet[1050]: E1007 12:23:59.343191    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303839342304637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:24:09 ha-628553 kubelet[1050]: E1007 12:24:09.345083    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303849344727907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:24:09 ha-628553 kubelet[1050]: E1007 12:24:09.345365    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303849344727907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:24:19 ha-628553 kubelet[1050]: E1007 12:24:19.347346    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303859346981274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:24:19 ha-628553 kubelet[1050]: E1007 12:24:19.347394    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728303859346981274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-628553 -n ha-628553
helpers_test.go:261: (dbg) Run:  kubectl --context ha-628553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (619.20s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (173.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 stop -v=7 --alsologtostderr
E1007 12:25:01.380975  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-628553 stop -v=7 --alsologtostderr: exit status 82 (2m1.921941665s)

                                                
                                                
-- stdout --
	* Stopping node "ha-628553-m04"  ...
	* Stopping node "ha-628553-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:24:37.250412  410447 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:24:37.250560  410447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:24:37.250574  410447 out.go:358] Setting ErrFile to fd 2...
	I1007 12:24:37.250580  410447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:24:37.250810  410447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:24:37.251128  410447 out.go:352] Setting JSON to false
	I1007 12:24:37.251238  410447 mustload.go:65] Loading cluster: ha-628553
	I1007 12:24:37.251822  410447 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:24:37.251928  410447 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:24:37.252108  410447 mustload.go:65] Loading cluster: ha-628553
	I1007 12:24:37.252236  410447 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:24:37.252263  410447 stop.go:39] StopHost: ha-628553-m04
	I1007 12:24:37.252667  410447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:24:37.252713  410447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:24:37.269492  410447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I1007 12:24:37.270088  410447 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:24:37.271053  410447 main.go:141] libmachine: Using API Version  1
	I1007 12:24:37.271082  410447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:24:37.271588  410447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:24:37.274245  410447 out.go:177] * Stopping node "ha-628553-m04"  ...
	I1007 12:24:37.275697  410447 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:24:37.275743  410447 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:37.276069  410447 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:24:37.276109  410447 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:37.279616  410447 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:37.280124  410447 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:37.280166  410447 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:37.280360  410447 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:37.280568  410447 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:37.280748  410447 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:37.280931  410447 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:37.371593  410447 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 12:24:37.426743  410447 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 12:24:37.482434  410447 main.go:141] libmachine: Stopping "ha-628553-m04"...
	I1007 12:24:37.482491  410447 main.go:141] libmachine: (ha-628553-m04) Calling .GetState
	I1007 12:24:37.484157  410447 main.go:141] libmachine: (ha-628553-m04) Calling .Stop
	I1007 12:24:37.488319  410447 main.go:141] libmachine: (ha-628553-m04) Waiting for machine to stop 0/120
	I1007 12:24:38.673938  410447 main.go:141] libmachine: (ha-628553-m04) Calling .GetState
	I1007 12:24:38.675218  410447 main.go:141] libmachine: Machine "ha-628553-m04" was stopped.
	I1007 12:24:38.675236  410447 stop.go:75] duration metric: took 1.399546251s to stop
	I1007 12:24:38.675272  410447 stop.go:39] StopHost: ha-628553-m02
	I1007 12:24:38.675587  410447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:24:38.675638  410447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:24:38.690555  410447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I1007 12:24:38.691076  410447 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:24:38.691663  410447 main.go:141] libmachine: Using API Version  1
	I1007 12:24:38.691686  410447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:24:38.692001  410447 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:24:38.693937  410447 out.go:177] * Stopping node "ha-628553-m02"  ...
	I1007 12:24:38.695125  410447 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1007 12:24:38.695161  410447 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:24:38.695395  410447 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1007 12:24:38.695419  410447 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:24:38.698756  410447 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:24:38.699336  410447 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:24:38.699368  410447 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:24:38.699590  410447 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:24:38.699780  410447 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:24:38.699922  410447 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:24:38.700065  410447 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:24:38.790780  410447 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1007 12:24:38.844693  410447 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1007 12:24:38.898642  410447 main.go:141] libmachine: Stopping "ha-628553-m02"...
	I1007 12:24:38.898672  410447 main.go:141] libmachine: (ha-628553-m02) Calling .GetState
	I1007 12:24:38.900370  410447 main.go:141] libmachine: (ha-628553-m02) Calling .Stop
	I1007 12:24:38.903858  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 0/120
	I1007 12:24:39.905292  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 1/120
	I1007 12:24:40.906759  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 2/120
	I1007 12:24:41.908154  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 3/120
	I1007 12:24:42.909943  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 4/120
	I1007 12:24:43.912021  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 5/120
	I1007 12:24:44.913965  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 6/120
	I1007 12:24:45.915748  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 7/120
	I1007 12:24:46.917878  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 8/120
	I1007 12:24:47.919569  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 9/120
	I1007 12:24:48.921543  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 10/120
	I1007 12:24:49.923951  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 11/120
	I1007 12:24:50.925449  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 12/120
	I1007 12:24:51.928311  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 13/120
	I1007 12:24:52.929929  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 14/120
	I1007 12:24:53.932289  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 15/120
	I1007 12:24:54.934194  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 16/120
	I1007 12:24:55.935740  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 17/120
	I1007 12:24:56.937673  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 18/120
	I1007 12:24:57.939171  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 19/120
	I1007 12:24:58.941055  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 20/120
	I1007 12:24:59.943509  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 21/120
	I1007 12:25:00.945119  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 22/120
	I1007 12:25:01.946629  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 23/120
	I1007 12:25:02.948004  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 24/120
	I1007 12:25:03.950352  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 25/120
	I1007 12:25:04.952027  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 26/120
	I1007 12:25:05.953568  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 27/120
	I1007 12:25:06.955115  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 28/120
	I1007 12:25:07.956830  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 29/120
	I1007 12:25:08.958783  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 30/120
	I1007 12:25:09.960573  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 31/120
	I1007 12:25:10.962224  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 32/120
	I1007 12:25:11.964105  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 33/120
	I1007 12:25:12.966051  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 34/120
	I1007 12:25:13.968032  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 35/120
	I1007 12:25:14.969591  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 36/120
	I1007 12:25:15.971018  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 37/120
	I1007 12:25:16.972777  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 38/120
	I1007 12:25:17.974419  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 39/120
	I1007 12:25:18.976721  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 40/120
	I1007 12:25:19.978528  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 41/120
	I1007 12:25:20.980114  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 42/120
	I1007 12:25:21.981632  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 43/120
	I1007 12:25:22.983223  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 44/120
	I1007 12:25:23.985208  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 45/120
	I1007 12:25:24.986644  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 46/120
	I1007 12:25:25.988354  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 47/120
	I1007 12:25:26.989821  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 48/120
	I1007 12:25:27.992297  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 49/120
	I1007 12:25:28.994532  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 50/120
	I1007 12:25:29.995979  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 51/120
	I1007 12:25:30.997684  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 52/120
	I1007 12:25:31.998948  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 53/120
	I1007 12:25:33.001108  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 54/120
	I1007 12:25:34.003233  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 55/120
	I1007 12:25:35.004445  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 56/120
	I1007 12:25:36.006017  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 57/120
	I1007 12:25:37.007598  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 58/120
	I1007 12:25:38.009327  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 59/120
	I1007 12:25:39.011212  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 60/120
	I1007 12:25:40.012619  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 61/120
	I1007 12:25:41.014134  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 62/120
	I1007 12:25:42.015486  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 63/120
	I1007 12:25:43.016885  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 64/120
	I1007 12:25:44.019231  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 65/120
	I1007 12:25:45.020870  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 66/120
	I1007 12:25:46.022447  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 67/120
	I1007 12:25:47.024199  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 68/120
	I1007 12:25:48.025818  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 69/120
	I1007 12:25:49.027904  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 70/120
	I1007 12:25:50.029594  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 71/120
	I1007 12:25:51.031536  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 72/120
	I1007 12:25:52.033705  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 73/120
	I1007 12:25:53.035390  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 74/120
	I1007 12:25:54.037218  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 75/120
	I1007 12:25:55.038882  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 76/120
	I1007 12:25:56.040279  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 77/120
	I1007 12:25:57.041989  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 78/120
	I1007 12:25:58.043529  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 79/120
	I1007 12:25:59.045708  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 80/120
	I1007 12:26:00.047291  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 81/120
	I1007 12:26:01.049830  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 82/120
	I1007 12:26:02.051205  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 83/120
	I1007 12:26:03.052564  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 84/120
	I1007 12:26:04.054578  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 85/120
	I1007 12:26:05.056115  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 86/120
	I1007 12:26:06.057613  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 87/120
	I1007 12:26:07.059214  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 88/120
	I1007 12:26:08.061601  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 89/120
	I1007 12:26:09.063727  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 90/120
	I1007 12:26:10.065688  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 91/120
	I1007 12:26:11.067174  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 92/120
	I1007 12:26:12.068753  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 93/120
	I1007 12:26:13.070358  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 94/120
	I1007 12:26:14.072265  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 95/120
	I1007 12:26:15.073792  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 96/120
	I1007 12:26:16.075238  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 97/120
	I1007 12:26:17.076700  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 98/120
	I1007 12:26:18.078149  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 99/120
	I1007 12:26:19.079829  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 100/120
	I1007 12:26:20.081697  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 101/120
	I1007 12:26:21.083268  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 102/120
	I1007 12:26:22.084872  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 103/120
	I1007 12:26:23.086153  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 104/120
	I1007 12:26:24.087924  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 105/120
	I1007 12:26:25.089644  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 106/120
	I1007 12:26:26.090854  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 107/120
	I1007 12:26:27.093069  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 108/120
	I1007 12:26:28.094473  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 109/120
	I1007 12:26:29.096341  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 110/120
	I1007 12:26:30.097763  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 111/120
	I1007 12:26:31.099349  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 112/120
	I1007 12:26:32.100991  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 113/120
	I1007 12:26:33.102579  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 114/120
	I1007 12:26:34.104640  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 115/120
	I1007 12:26:35.106019  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 116/120
	I1007 12:26:36.107596  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 117/120
	I1007 12:26:37.108959  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 118/120
	I1007 12:26:38.110522  410447 main.go:141] libmachine: (ha-628553-m02) Waiting for machine to stop 119/120
	I1007 12:26:39.111565  410447 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1007 12:26:39.111653  410447 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1007 12:26:39.113505  410447 out.go:201] 
	W1007 12:26:39.115024  410447 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1007 12:26:39.115040  410447 out.go:270] * 
	* 
	W1007 12:26:39.118271  410447 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 12:26:39.119601  410447 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-628553 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr
E1007 12:26:42.462391  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr: (34.213627855s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr": 
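For context on the GUEST_STOP_TIMEOUT and exit status 82 above: the stderr shows libmachine asking the ha-628553-m02 guest to stop and then polling its state roughly once per second for 120 attempts, after which the VM still reports "Running" and the stop is abandoned. The snippet below is a minimal, hypothetical Go sketch of that polling pattern only, written to make the log readable; the vm interface, method names, and the stuckVM stand-in are illustrative assumptions and are not minikube's actual libmachine API.

// stopwait.go - assumed sketch of the stop-poll loop visible in the log above.
package main

import (
	"fmt"
	"time"
)

// vm is a stand-in for a driver handle; State and Stop are illustrative names.
type vm interface {
	State() string
	Stop() error
}

// stopWithTimeout requests a stop, then polls the guest state up to
// `attempts` times at `interval`, mirroring the "Waiting for machine to
// stop N/120" lines, and fails if the guest never leaves "Running".
func stopWithTimeout(m vm, attempts int, interval time.Duration) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if m.State() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	return fmt.Errorf("unable to stop vm, current state %q", m.State())
}

// stuckVM simulates a guest that never powers off, reproducing the failure mode.
type stuckVM struct{}

func (stuckVM) State() string { return "Running" }
func (stuckVM) Stop() error   { return nil }

func main() {
	if err := stopWithTimeout(stuckVM{}, 3, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}

Run against the stuckVM stand-in, this prints a few "Waiting for machine to stop" lines and then "stop err: unable to stop vm, current state \"Running\"", which is the same condition the test surfaces as GUEST_STOP_TIMEOUT before the post-mortem below.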
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-628553 -n ha-628553
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-628553 -n ha-628553: exit status 2 (15.616007956s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-628553 logs -n 25: (1.435636693s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m04 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp testdata/cp-test.txt                                                | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553:/home/docker/cp-test_ha-628553-m04_ha-628553.txt                       |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553 sudo cat                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553.txt                                 |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m02:/home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m02 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m03:/home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n                                                                 | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | ha-628553-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-628553 ssh -n ha-628553-m03 sudo cat                                          | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC | 07 Oct 24 12:11 UTC |
	|         | /home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-628553 node stop m02 -v=7                                                     | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-628553 node start m02 -v=7                                                    | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:13 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-628553 -v=7                                                           | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-628553 -v=7                                                                | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-628553 --wait=true -v=7                                                    | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:16 UTC | 07 Oct 24 12:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-628553                                                                | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC |                     |
	| node    | ha-628553 node delete m03 -v=7                                                   | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC | 07 Oct 24 12:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-628553 stop -v=7                                                              | ha-628553 | jenkins | v1.34.0 | 07 Oct 24 12:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:16:26
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:16:26.123757  407433 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:16:26.123885  407433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:16:26.123894  407433 out.go:358] Setting ErrFile to fd 2...
	I1007 12:16:26.123899  407433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:16:26.124099  407433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:16:26.124687  407433 out.go:352] Setting JSON to false
	I1007 12:16:26.125704  407433 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7132,"bootTime":1728296254,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:16:26.125769  407433 start.go:139] virtualization: kvm guest
	I1007 12:16:26.128261  407433 out.go:177] * [ha-628553] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:16:26.129631  407433 notify.go:220] Checking for updates...
	I1007 12:16:26.129690  407433 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:16:26.131194  407433 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:16:26.132881  407433 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:16:26.134204  407433 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:16:26.135537  407433 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:16:26.136781  407433 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:16:26.138675  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:16:26.138806  407433 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:16:26.139340  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:16:26.139398  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:16:26.155992  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41965
	I1007 12:16:26.156513  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:16:26.157038  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:16:26.157059  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:16:26.157404  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:16:26.157605  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:16:26.193278  407433 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 12:16:26.194596  407433 start.go:297] selected driver: kvm2
	I1007 12:16:26.194609  407433 start.go:901] validating driver "kvm2" against &{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false
efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:16:26.194734  407433 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:16:26.195065  407433 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:16:26.195142  407433 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:16:26.210263  407433 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:16:26.210923  407433 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:16:26.210980  407433 cni.go:84] Creating CNI manager for ""
	I1007 12:16:26.211057  407433 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 12:16:26.211117  407433 start.go:340] cluster config:
	{Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.3
9.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacce
l:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:16:26.211269  407433 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:16:26.213096  407433 out.go:177] * Starting "ha-628553" primary control-plane node in "ha-628553" cluster
	I1007 12:16:26.214271  407433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:16:26.214313  407433 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:16:26.214324  407433 cache.go:56] Caching tarball of preloaded images
	I1007 12:16:26.214415  407433 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:16:26.214425  407433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:16:26.214536  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:16:26.214713  407433 start.go:360] acquireMachinesLock for ha-628553: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:16:26.214754  407433 start.go:364] duration metric: took 22.976µs to acquireMachinesLock for "ha-628553"
	I1007 12:16:26.214769  407433 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:16:26.214776  407433 fix.go:54] fixHost starting: 
	I1007 12:16:26.215091  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:16:26.215129  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:16:26.229648  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41911
	I1007 12:16:26.230107  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:16:26.230606  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:16:26.230627  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:16:26.230939  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:16:26.231168  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:16:26.231307  407433 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:16:26.232790  407433 fix.go:112] recreateIfNeeded on ha-628553: state=Running err=<nil>
	W1007 12:16:26.232814  407433 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:16:26.235018  407433 out.go:177] * Updating the running kvm2 "ha-628553" VM ...
	I1007 12:16:26.236377  407433 machine.go:93] provisionDockerMachine start ...
	I1007 12:16:26.236397  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:16:26.236609  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:16:26.239043  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:16:26.239559  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:07:01 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:16:26.239609  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:16:26.239720  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:16:26.239947  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:16:26.240108  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:16:26.240247  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:16:26.240401  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:16:26.240603  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:16:26.240614  407433 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:16:44.599314  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:16:50.679247  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:16:53.751394  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:16:59.831258  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:02.903378  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:08.983267  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:12.055350  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:18.135276  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:21.207331  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:27.287338  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:33.367275  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:36.439285  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:42.519255  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:45.591262  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:51.671323  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:17:54.743312  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:00.823290  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:03.895331  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:09.975294  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:13.047432  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:19.127244  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:22.199315  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:28.279347  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:31.351313  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:37.431267  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:40.503281  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:46.583296  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:49.655299  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:55.735286  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:18:58.807398  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:04.887363  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:07.959313  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:14.039298  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:17.111274  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:23.191254  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:26.263229  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:32.343289  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:35.415281  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:41.495251  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:44.567273  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:50.647311  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:53.719310  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:19:59.799336  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:02.871340  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:08.951312  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:12.023252  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:18.103270  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:21.175326  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:27.255340  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:30.327259  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:36.407258  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:39.479362  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:45.559291  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:48.631374  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:54.711275  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:20:57.783299  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:21:03.863249  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:21:06.935316  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:21:13.015272  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:21:16.087303  407433 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.110:22: connect: no route to host
	I1007 12:21:19.090010  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:21:19.090081  407433 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:21:19.090436  407433 buildroot.go:166] provisioning hostname "ha-628553"
	I1007 12:21:19.090476  407433 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:21:19.090712  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:19.092889  407433 machine.go:96] duration metric: took 4m52.856494555s to provisionDockerMachine
	I1007 12:21:19.092936  407433 fix.go:56] duration metric: took 4m52.878159598s for fixHost
	I1007 12:21:19.092942  407433 start.go:83] releasing machines lock for "ha-628553", held for 4m52.878179978s
	W1007 12:21:19.092959  407433 start.go:714] error starting host: provision: host is not running
	W1007 12:21:19.093084  407433 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1007 12:21:19.093093  407433 start.go:729] Will try again in 5 seconds ...
	I1007 12:21:24.095416  407433 start.go:360] acquireMachinesLock for ha-628553: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:21:24.095566  407433 start.go:364] duration metric: took 81.063µs to acquireMachinesLock for "ha-628553"
	I1007 12:21:24.095604  407433 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:21:24.095613  407433 fix.go:54] fixHost starting: 
	I1007 12:21:24.095992  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:21:24.096023  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:21:24.112503  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I1007 12:21:24.113085  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:21:24.113729  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:21:24.113752  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:21:24.114103  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:21:24.114310  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:24.114471  407433 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:21:24.116362  407433 fix.go:112] recreateIfNeeded on ha-628553: state=Stopped err=<nil>
	I1007 12:21:24.116387  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	W1007 12:21:24.116572  407433 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:21:24.119518  407433 out.go:177] * Restarting existing kvm2 VM for "ha-628553" ...
	I1007 12:21:24.121193  407433 main.go:141] libmachine: (ha-628553) Calling .Start
	I1007 12:21:24.121531  407433 main.go:141] libmachine: (ha-628553) Ensuring networks are active...
	I1007 12:21:24.122685  407433 main.go:141] libmachine: (ha-628553) Ensuring network default is active
	I1007 12:21:24.123229  407433 main.go:141] libmachine: (ha-628553) Ensuring network mk-ha-628553 is active
	I1007 12:21:24.123712  407433 main.go:141] libmachine: (ha-628553) Getting domain xml...
	I1007 12:21:24.124530  407433 main.go:141] libmachine: (ha-628553) Creating domain...
	I1007 12:21:25.367026  407433 main.go:141] libmachine: (ha-628553) Waiting to get IP...
	I1007 12:21:25.368097  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:25.368533  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:25.368608  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:25.368510  408877 retry.go:31] will retry after 279.419429ms: waiting for machine to come up
	I1007 12:21:25.650333  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:25.650773  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:25.650798  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:25.650724  408877 retry.go:31] will retry after 283.251799ms: waiting for machine to come up
	I1007 12:21:25.935196  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:25.935605  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:25.935630  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:25.935555  408877 retry.go:31] will retry after 476.147073ms: waiting for machine to come up
	I1007 12:21:26.413173  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:26.413522  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:26.413551  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:26.413509  408877 retry.go:31] will retry after 398.750079ms: waiting for machine to come up
	I1007 12:21:26.814134  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:26.814547  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:26.814577  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:26.814483  408877 retry.go:31] will retry after 616.527868ms: waiting for machine to come up
	I1007 12:21:27.432565  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:27.433095  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:27.433129  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:27.433033  408877 retry.go:31] will retry after 906.153026ms: waiting for machine to come up
	I1007 12:21:28.341150  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:28.341606  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:28.341641  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:28.341511  408877 retry.go:31] will retry after 1.022594433s: waiting for machine to come up
	I1007 12:21:29.366330  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:29.366748  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:29.366770  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:29.366714  408877 retry.go:31] will retry after 1.132267271s: waiting for machine to come up
	I1007 12:21:30.501161  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:30.501554  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:30.501590  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:30.501492  408877 retry.go:31] will retry after 1.319777065s: waiting for machine to come up
	I1007 12:21:31.823354  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:31.823800  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:31.823827  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:31.823748  408877 retry.go:31] will retry after 1.461219032s: waiting for machine to come up
	I1007 12:21:33.287405  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:33.287878  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:33.287908  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:33.287824  408877 retry.go:31] will retry after 2.368607456s: waiting for machine to come up
	I1007 12:21:35.658851  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:35.659296  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:35.659324  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:35.659255  408877 retry.go:31] will retry after 2.655568538s: waiting for machine to come up
	I1007 12:21:38.318268  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:38.318804  407433 main.go:141] libmachine: (ha-628553) DBG | unable to find current IP address of domain ha-628553 in network mk-ha-628553
	I1007 12:21:38.318831  407433 main.go:141] libmachine: (ha-628553) DBG | I1007 12:21:38.318692  408877 retry.go:31] will retry after 4.033786402s: waiting for machine to come up
	I1007 12:21:42.356645  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.357140  407433 main.go:141] libmachine: (ha-628553) Found IP for machine: 192.168.39.110
	I1007 12:21:42.357166  407433 main.go:141] libmachine: (ha-628553) Reserving static IP address...
	I1007 12:21:42.357184  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has current primary IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.357629  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "ha-628553", mac: "52:54:00:7b:12:fd", ip: "192.168.39.110"} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.357662  407433 main.go:141] libmachine: (ha-628553) DBG | skip adding static IP to network mk-ha-628553 - found existing host DHCP lease matching {name: "ha-628553", mac: "52:54:00:7b:12:fd", ip: "192.168.39.110"}
	I1007 12:21:42.357678  407433 main.go:141] libmachine: (ha-628553) Reserved static IP address: 192.168.39.110
	I1007 12:21:42.357724  407433 main.go:141] libmachine: (ha-628553) Waiting for SSH to be available...
	I1007 12:21:42.357742  407433 main.go:141] libmachine: (ha-628553) DBG | Getting to WaitForSSH function...
	I1007 12:21:42.359902  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.360251  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.360271  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.360448  407433 main.go:141] libmachine: (ha-628553) DBG | Using SSH client type: external
	I1007 12:21:42.360477  407433 main.go:141] libmachine: (ha-628553) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa (-rw-------)
	I1007 12:21:42.360512  407433 main.go:141] libmachine: (ha-628553) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:21:42.360527  407433 main.go:141] libmachine: (ha-628553) DBG | About to run SSH command:
	I1007 12:21:42.360537  407433 main.go:141] libmachine: (ha-628553) DBG | exit 0
	I1007 12:21:42.483116  407433 main.go:141] libmachine: (ha-628553) DBG | SSH cmd err, output: <nil>: 
	I1007 12:21:42.483536  407433 main.go:141] libmachine: (ha-628553) Calling .GetConfigRaw
	I1007 12:21:42.484252  407433 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:21:42.486980  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.487455  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.487480  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.487844  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:21:42.488065  407433 machine.go:93] provisionDockerMachine start ...
	I1007 12:21:42.488101  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:42.488336  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:42.490571  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.490951  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.490998  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.491066  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:42.491287  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.491435  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.491558  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:42.491740  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:21:42.491981  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:21:42.491995  407433 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:21:42.591574  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:21:42.591609  407433 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:21:42.591857  407433 buildroot.go:166] provisioning hostname "ha-628553"
	I1007 12:21:42.591888  407433 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:21:42.592065  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:42.595332  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.595848  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.595878  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.596115  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:42.596310  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.596459  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.596587  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:42.596779  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:21:42.596970  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:21:42.596985  407433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553 && echo "ha-628553" | sudo tee /etc/hostname
	I1007 12:21:42.715355  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553
	
	I1007 12:21:42.715386  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:42.718394  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.718755  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.718789  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.718953  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:42.719149  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.719306  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.719395  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:42.719539  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:21:42.719757  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:21:42.719774  407433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:21:42.829073  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:21:42.829128  407433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:21:42.829150  407433 buildroot.go:174] setting up certificates
	I1007 12:21:42.829164  407433 provision.go:84] configureAuth start
	I1007 12:21:42.829182  407433 main.go:141] libmachine: (ha-628553) Calling .GetMachineName
	I1007 12:21:42.829513  407433 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:21:42.832451  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.832765  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.832789  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.833001  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:42.835330  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.835639  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.835666  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.835776  407433 provision.go:143] copyHostCerts
	I1007 12:21:42.835829  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:21:42.835898  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:21:42.835919  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:21:42.835999  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:21:42.836099  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:21:42.836120  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:21:42.836128  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:21:42.836155  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:21:42.836210  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:21:42.836229  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:21:42.836235  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:21:42.836258  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:21:42.836323  407433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553 san=[127.0.0.1 192.168.39.110 ha-628553 localhost minikube]
	I1007 12:21:42.909733  407433 provision.go:177] copyRemoteCerts
	I1007 12:21:42.909804  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:21:42.909830  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:42.912711  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.913150  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:42.913179  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:42.913345  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:42.913555  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:42.913751  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:42.913894  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:21:42.993885  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:21:42.993979  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1007 12:21:43.019522  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:21:43.019599  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:21:43.045619  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:21:43.045708  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:21:43.071015  407433 provision.go:87] duration metric: took 241.830335ms to configureAuth
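A quick way to sanity-check the certificates that were just copied to the machine is to verify the server certificate against the CA and inspect its SANs (paths are the remote cert paths from the auth options above; these commands are illustrative only and are not run by the test). The SANs should include the names listed in the san=[...] set above (127.0.0.1, 192.168.39.110, ha-628553, localhost, minikube):
	openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'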
	I1007 12:21:43.071046  407433 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:21:43.071275  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:21:43.071355  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.074346  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.074687  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.074714  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.074864  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.075099  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.075285  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.075454  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.075642  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:21:43.075864  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:21:43.075882  407433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:21:43.302697  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:21:43.302739  407433 machine.go:96] duration metric: took 814.660374ms to provisionDockerMachine
	I1007 12:21:43.302758  407433 start.go:293] postStartSetup for "ha-628553" (driver="kvm2")
	I1007 12:21:43.302773  407433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:21:43.302794  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.303209  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:21:43.303247  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.305797  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.306200  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.306254  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.306414  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.306669  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.306846  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.307097  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:21:43.386822  407433 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:21:43.391301  407433 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:21:43.391356  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:21:43.391439  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:21:43.391522  407433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:21:43.391541  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:21:43.391629  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:21:43.401523  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:21:43.427243  407433 start.go:296] duration metric: took 124.440543ms for postStartSetup
	I1007 12:21:43.427311  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.427678  407433 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 12:21:43.427715  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.430704  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.431295  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.431322  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.431525  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.431734  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.431921  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.432070  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:21:43.514791  407433 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1007 12:21:43.514878  407433 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
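The restore uses rsync's --archive --update flags, so files under /var/lib/minikube/backup/etc are copied back over / only where the destination copy is not newer than the backup. A non-destructive way to preview what such a restore would touch (illustrative only, not part of the test run):
	sudo rsync --archive --update --dry-run --itemize-changes /var/lib/minikube/backup/etc /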
	I1007 12:21:43.555143  407433 fix.go:56] duration metric: took 19.459516578s for fixHost
	I1007 12:21:43.555213  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.558511  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.558893  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.558934  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.559133  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.559345  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.559558  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.559700  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.559877  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:21:43.560073  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I1007 12:21:43.560086  407433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:21:43.664129  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728303703.629612558
	
	I1007 12:21:43.664163  407433 fix.go:216] guest clock: 1728303703.629612558
	I1007 12:21:43.664176  407433 fix.go:229] Guest: 2024-10-07 12:21:43.629612558 +0000 UTC Remote: 2024-10-07 12:21:43.5551888 +0000 UTC m=+317.472624770 (delta=74.423758ms)
	I1007 12:21:43.664203  407433 fix.go:200] guest clock delta is within tolerance: 74.423758ms
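The clock check above amounts to running date +%s.%N on the guest and comparing it with the host's wall clock; only if the absolute delta exceeded minikube's tolerance would the guest clock be adjusted. A rough manual equivalent, using the SSH key path shown earlier (illustrative sketch; assumes bc is available on the host):
	host=$(date +%s.%N)
	guest=$(ssh -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa docker@192.168.39.110 'date +%s.%N')
	echo "delta: $(echo "$guest - $host" | bc)s"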
	I1007 12:21:43.664209  407433 start.go:83] releasing machines lock for "ha-628553", held for 19.56863138s
	I1007 12:21:43.664247  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.664531  407433 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:21:43.667342  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.667692  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.667713  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.667926  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.668513  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.668738  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:21:43.668823  407433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:21:43.668885  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.668992  407433 ssh_runner.go:195] Run: cat /version.json
	I1007 12:21:43.669019  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:21:43.671881  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.672069  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.672323  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.672347  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.672508  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:43.672540  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:43.672558  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.672775  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.672782  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:21:43.672993  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:21:43.673037  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.673154  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:21:43.673173  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:21:43.673313  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:21:43.773521  407433 ssh_runner.go:195] Run: systemctl --version
	I1007 12:21:43.779770  407433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:21:43.923129  407433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:21:43.929378  407433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:21:43.929478  407433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:21:43.947124  407433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:21:43.947158  407433 start.go:495] detecting cgroup driver to use...
	I1007 12:21:43.947250  407433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:21:43.968850  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:21:43.983128  407433 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:21:43.983187  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:21:43.998922  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:21:44.013767  407433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:21:44.131824  407433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:21:44.307748  407433 docker.go:233] disabling docker service ...
	I1007 12:21:44.307813  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:21:44.322761  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:21:44.336261  407433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:21:44.455668  407433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:21:44.573473  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:21:44.588200  407433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:21:44.609001  407433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:21:44.609108  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.620005  407433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:21:44.620097  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.631644  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.642816  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.654321  407433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:21:44.665685  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.676944  407433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:21:44.695174  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
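Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a sketch inferred from the commands; section headers, key order, and other keys in the file are omitted):
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]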
	I1007 12:21:44.706235  407433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:21:44.716588  407433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:21:44.716660  407433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:21:44.730452  407433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
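Because the bridge-netfilter sysctl was missing, the br_netfilter module is loaded explicitly and IPv4 forwarding is enabled. The end state could be confirmed on the guest with (illustrative only):
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward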
	I1007 12:21:44.740676  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:21:44.871591  407433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:21:44.976983  407433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:21:44.977064  407433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:21:44.982348  407433 start.go:563] Will wait 60s for crictl version
	I1007 12:21:44.982414  407433 ssh_runner.go:195] Run: which crictl
	I1007 12:21:44.986177  407433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:21:45.026688  407433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:21:45.026772  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:21:45.056385  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:21:45.089059  407433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:21:45.090356  407433 main.go:141] libmachine: (ha-628553) Calling .GetIP
	I1007 12:21:45.092940  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:45.093302  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:21:45.093327  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:21:45.093547  407433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:21:45.098195  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:21:45.112382  407433 kubeadm.go:883] updating cluster {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:21:45.112579  407433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:21:45.112630  407433 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:21:45.157388  407433 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 12:21:45.157470  407433 ssh_runner.go:195] Run: which lz4
	I1007 12:21:45.161737  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1007 12:21:45.161869  407433 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:21:45.166514  407433 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:21:45.166551  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 12:21:46.605371  407433 crio.go:462] duration metric: took 1.443545276s to copy over tarball
	I1007 12:21:46.605453  407433 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:21:48.644174  407433 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.038668789s)
	I1007 12:21:48.644223  407433 crio.go:469] duration metric: took 2.038822202s to extract the tarball
	I1007 12:21:48.644232  407433 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 12:21:48.681627  407433 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:21:48.729709  407433 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:21:48.729745  407433 cache_images.go:84] Images are preloaded, skipping loading
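After the preload tarball (~389 MB, scp'd above) is extracted under /var, the v1.31.1 images are already present in CRI-O's storage, which is why this second crictl images call reports everything as preloaded. A spot check for a single image would look like (illustrative only):
	sudo crictl images | grep kube-apiserver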
	I1007 12:21:48.729755  407433 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.31.1 crio true true} ...
	I1007 12:21:48.729876  407433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:21:48.729949  407433 ssh_runner.go:195] Run: crio config
	I1007 12:21:48.777864  407433 cni.go:84] Creating CNI manager for ""
	I1007 12:21:48.777889  407433 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1007 12:21:48.777900  407433 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:21:48.777927  407433 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-628553 NodeName:ha-628553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:21:48.778139  407433 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-628553"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
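This generated kubeadm config is later written to /var/tmp/minikube/kubeadm.yaml.new on the guest (see the scp below). With kubeadm v1.31 it could in principle be validated in place with something like the following (illustrative only; the test itself does not run this):
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new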
	
	I1007 12:21:48.778167  407433 kube-vip.go:115] generating kube-vip config ...
	I1007 12:21:48.778226  407433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:21:48.794550  407433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:21:48.794658  407433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
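This manifest is written below as the static pod /etc/kubernetes/manifests/kube-vip.yaml; the elected kube-vip leader then holds the HA virtual IP 192.168.39.254 (the APIServerHAVIP from the cluster config) on eth0 and load-balances port 8443 across the control-plane nodes. A quick way to see whether a given node currently owns the VIP (illustrative only):
	ip addr show eth0 | grep 192.168.39.254
	sudo crictl ps --name kube-vip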
	I1007 12:21:48.794711  407433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:21:48.804548  407433 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:21:48.804616  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1007 12:21:48.814049  407433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1007 12:21:48.830950  407433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:21:48.847474  407433 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1007 12:21:48.864374  407433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:21:48.881516  407433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:21:48.885417  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:21:48.897733  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:21:49.015861  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:21:49.033974  407433 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.110
	I1007 12:21:49.033999  407433 certs.go:194] generating shared ca certs ...
	I1007 12:21:49.034021  407433 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:21:49.034242  407433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:21:49.034299  407433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:21:49.034315  407433 certs.go:256] generating profile certs ...
	I1007 12:21:49.034456  407433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:21:49.034493  407433 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.eb58c645
	I1007 12:21:49.034513  407433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.eb58c645 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.149 192.168.39.254]
	I1007 12:21:49.325201  407433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.eb58c645 ...
	I1007 12:21:49.325236  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.eb58c645: {Name:mk52b692a291609d28023b2e669acc8c5036935a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:21:49.325440  407433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.eb58c645 ...
	I1007 12:21:49.325458  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.eb58c645: {Name:mk459cf1eb91311870c17fc9cbea0da8e2941bb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:21:49.325562  407433 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.eb58c645 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:21:49.325744  407433 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.eb58c645 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:21:49.325888  407433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:21:49.325906  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:21:49.325919  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:21:49.325930  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:21:49.325941  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:21:49.325954  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:21:49.325974  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:21:49.325985  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:21:49.325997  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:21:49.326051  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:21:49.326085  407433 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:21:49.326095  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:21:49.326116  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:21:49.326137  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:21:49.326159  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:21:49.326194  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:21:49.326227  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:21:49.326242  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:21:49.326254  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:21:49.326903  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:21:49.358627  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:21:49.384872  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:21:49.410247  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:21:49.449977  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1007 12:21:49.476006  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:21:49.502461  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:21:49.527930  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:21:49.554264  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:21:49.579161  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:21:49.604114  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:21:49.627763  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:21:49.646019  407433 ssh_runner.go:195] Run: openssl version
	I1007 12:21:49.652289  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:21:49.665149  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:21:49.670047  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:21:49.670122  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:21:49.676216  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:21:49.689578  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:21:49.702388  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:21:49.707134  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:21:49.707210  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:21:49.713294  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:21:49.726297  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:21:49.740242  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:21:49.745098  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:21:49.745158  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:21:49.751364  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
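The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding certificates, which is what the preceding openssl x509 -hash -noout calls compute; OpenSSL resolves CAs in /etc/ssl/certs by that hash. For example (illustrative only), the minikube CA link can be cross-checked with:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # should print b5213941, matching the symlink created above
	ls -l /etc/ssl/certs/b5213941.0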
	I1007 12:21:49.764372  407433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:21:49.769743  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:21:49.776032  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:21:49.782996  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:21:49.789785  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:21:49.796036  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:21:49.801890  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
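The openssl/ln pairs above follow the standard CA trust-store convention: each certificate in /etc/ssl/certs is reachable through a symlink named after its subject hash (for example b5213941.0), and the -checkend 86400 calls flag any certificate that expires within the next 24 hours. A minimal Go sketch of those two operations, assuming openssl is on PATH; the helper names are illustrative, not minikube's own code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash reproduces the "<subject-hash>.0" symlink convention used
    // by the trust store: `openssl x509 -hash -noout` prints the hash, and the
    // symlink makes the certificate discoverable under that name.
    func linkBySubjectHash(certPath, trustDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(trustDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // ignore error; mimics `ln -fs`
    	return os.Symlink(certPath, link)
    }

    // expiresWithinADay mirrors `-checkend 86400`, which exits non-zero when the
    // certificate expires within 24 hours.
    func expiresWithinADay(certPath string) bool {
    	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
    	return err != nil
    }

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above
    	if err := linkBySubjectHash(cert, "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	fmt.Println("expires within 24h:", expiresWithinADay(cert))
    }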
	I1007 12:21:49.807904  407433 kubeadm.go:392] StartCluster: {Name:ha-628553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:21:49.808044  407433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:21:49.808092  407433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:21:49.848170  407433 cri.go:89] found id: ""
	I1007 12:21:49.848266  407433 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:21:49.859183  407433 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 12:21:49.859210  407433 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 12:21:49.859307  407433 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 12:21:49.870460  407433 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 12:21:49.871075  407433 kubeconfig.go:125] found "ha-628553" server: "https://192.168.39.254:8443"
	I1007 12:21:49.871114  407433 kubeconfig.go:47] verify endpoint returned: got: 192.168.39.254:8443, want: 192.168.39.110:8443
	I1007 12:21:49.871485  407433 kubeconfig.go:62] /home/jenkins/minikube-integration/19763-377026/kubeconfig needs updating (will repair): [kubeconfig needs server address update]
	I1007 12:21:49.871770  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:21:49.872195  407433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:21:49.872502  407433 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.110:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:21:49.872973  407433 cert_rotation.go:140] Starting client certificate rotation controller
	I1007 12:21:49.873226  407433 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 12:21:49.883822  407433 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I1007 12:21:49.883853  407433 kubeadm.go:597] duration metric: took 24.635347ms to restartPrimaryControlPlane
	I1007 12:21:49.883865  407433 kubeadm.go:394] duration metric: took 75.972126ms to StartCluster
	I1007 12:21:49.883888  407433 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:21:49.883981  407433 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:21:49.884584  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
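The kubeconfig repair above swaps the HA virtual IP (192.168.39.254) for the primary node endpoint (192.168.39.110) as the cluster's server address before writing the file back. A rough sketch of the same fix using client-go's clientcmd package; the function name and literal values are illustrative, not minikube's internal code:

    package main

    import (
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    )

    // repairServerAddress rewrites the server URL for one named cluster in a
    // kubeconfig file, as the "kubeconfig needs server address update" step does.
    func repairServerAddress(path, clusterName, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if c, ok := cfg.Clusters[clusterName]; ok && c.Server != server {
    		c.Server = server // e.g. https://192.168.39.254:8443 -> https://192.168.39.110:8443
    	}
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	err := repairServerAddress(
    		"/home/jenkins/minikube-integration/19763-377026/kubeconfig",
    		"ha-628553", "https://192.168.39.110:8443")
    	if err != nil {
    		log.Fatal(err)
    	}
    }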
	I1007 12:21:49.884832  407433 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:21:49.884857  407433 start.go:241] waiting for startup goroutines ...
	I1007 12:21:49.884866  407433 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:21:49.885049  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:21:49.887198  407433 out.go:177] * Enabled addons: 
	I1007 12:21:49.888602  407433 addons.go:510] duration metric: took 3.73375ms for enable addons: enabled=[]
	I1007 12:21:49.888640  407433 start.go:246] waiting for cluster config update ...
	I1007 12:21:49.888652  407433 start.go:255] writing updated cluster config ...
	I1007 12:21:49.890380  407433 out.go:201] 
	I1007 12:21:49.892044  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:21:49.892193  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:21:49.893863  407433 out.go:177] * Starting "ha-628553-m02" control-plane node in "ha-628553" cluster
	I1007 12:21:49.895054  407433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:21:49.895079  407433 cache.go:56] Caching tarball of preloaded images
	I1007 12:21:49.895179  407433 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:21:49.895193  407433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:21:49.895302  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:21:49.895485  407433 start.go:360] acquireMachinesLock for ha-628553-m02: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:21:49.895560  407433 start.go:364] duration metric: took 38.924µs to acquireMachinesLock for "ha-628553-m02"
	I1007 12:21:49.895582  407433 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:21:49.895589  407433 fix.go:54] fixHost starting: m02
	I1007 12:21:49.895875  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:21:49.895903  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:21:49.911264  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I1007 12:21:49.911760  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:21:49.912290  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:21:49.912312  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:21:49.912642  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:21:49.912822  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:21:49.912974  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetState
	I1007 12:21:49.914562  407433 fix.go:112] recreateIfNeeded on ha-628553-m02: state=Stopped err=<nil>
	I1007 12:21:49.914585  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	W1007 12:21:49.914751  407433 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:21:49.916531  407433 out.go:177] * Restarting existing kvm2 VM for "ha-628553-m02" ...
	I1007 12:21:49.917652  407433 main.go:141] libmachine: (ha-628553-m02) Calling .Start
	I1007 12:21:49.917805  407433 main.go:141] libmachine: (ha-628553-m02) Ensuring networks are active...
	I1007 12:21:49.918580  407433 main.go:141] libmachine: (ha-628553-m02) Ensuring network default is active
	I1007 12:21:49.918949  407433 main.go:141] libmachine: (ha-628553-m02) Ensuring network mk-ha-628553 is active
	I1007 12:21:49.919375  407433 main.go:141] libmachine: (ha-628553-m02) Getting domain xml...
	I1007 12:21:49.920033  407433 main.go:141] libmachine: (ha-628553-m02) Creating domain...
	I1007 12:21:51.235971  407433 main.go:141] libmachine: (ha-628553-m02) Waiting to get IP...
	I1007 12:21:51.237030  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:51.237526  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:51.237632  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:51.237515  409044 retry.go:31] will retry after 208.460483ms: waiting for machine to come up
	I1007 12:21:51.448245  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:51.448690  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:51.448739  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:51.448638  409044 retry.go:31] will retry after 314.033838ms: waiting for machine to come up
	I1007 12:21:51.764102  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:51.764559  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:51.764592  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:51.764510  409044 retry.go:31] will retry after 314.49319ms: waiting for machine to come up
	I1007 12:21:52.081111  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:52.081669  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:52.081702  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:52.081620  409044 retry.go:31] will retry after 607.201266ms: waiting for machine to come up
	I1007 12:21:52.690434  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:52.690884  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:52.690914  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:52.690843  409044 retry.go:31] will retry after 566.633148ms: waiting for machine to come up
	I1007 12:21:53.258616  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:53.259044  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:53.259067  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:53.259013  409044 retry.go:31] will retry after 586.73854ms: waiting for machine to come up
	I1007 12:21:53.847808  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:53.848191  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:53.848219  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:53.848137  409044 retry.go:31] will retry after 735.539748ms: waiting for machine to come up
	I1007 12:21:54.585005  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:54.585437  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:54.585466  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:54.585387  409044 retry.go:31] will retry after 1.240571246s: waiting for machine to come up
	I1007 12:21:55.827051  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:55.827539  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:55.827568  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:55.827489  409044 retry.go:31] will retry after 1.305114745s: waiting for machine to come up
	I1007 12:21:57.133879  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:57.134360  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:57.134385  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:57.134302  409044 retry.go:31] will retry after 1.972744404s: waiting for machine to come up
	I1007 12:21:59.109841  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:21:59.110349  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:21:59.110386  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:21:59.110283  409044 retry.go:31] will retry after 2.038392713s: waiting for machine to come up
	I1007 12:22:01.151126  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:01.151707  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:22:01.151742  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:22:01.151646  409044 retry.go:31] will retry after 2.812494777s: waiting for machine to come up
	I1007 12:22:03.967985  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:03.968480  407433 main.go:141] libmachine: (ha-628553-m02) DBG | unable to find current IP address of domain ha-628553-m02 in network mk-ha-628553
	I1007 12:22:03.968513  407433 main.go:141] libmachine: (ha-628553-m02) DBG | I1007 12:22:03.968409  409044 retry.go:31] will retry after 4.415302249s: waiting for machine to come up
	I1007 12:22:08.387856  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.388271  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has current primary IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.388292  407433 main.go:141] libmachine: (ha-628553-m02) Found IP for machine: 192.168.39.169
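The retry.go lines above poll libvirt for a DHCP lease, sleeping a little longer each attempt until the domain reports an IP. A generic stand-in for that pattern, with assumed names and delays; it is a sketch, not the actual retry.go implementation:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP keeps calling lookup until it returns an address or the deadline
    // passes, sleeping with a growing, jittered delay between attempts.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no DHCP lease yet") // simulated misses
    		}
    		return "192.168.39.169", nil
    	}, 30*time.Second)
    	fmt.Println(ip, err)
    }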
	I1007 12:22:08.388304  407433 main.go:141] libmachine: (ha-628553-m02) Reserving static IP address...
	I1007 12:22:08.388775  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "ha-628553-m02", mac: "52:54:00:59:4a:2e", ip: "192.168.39.169"} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.388803  407433 main.go:141] libmachine: (ha-628553-m02) Reserved static IP address: 192.168.39.169
	I1007 12:22:08.388822  407433 main.go:141] libmachine: (ha-628553-m02) DBG | skip adding static IP to network mk-ha-628553 - found existing host DHCP lease matching {name: "ha-628553-m02", mac: "52:54:00:59:4a:2e", ip: "192.168.39.169"}
	I1007 12:22:08.388840  407433 main.go:141] libmachine: (ha-628553-m02) DBG | Getting to WaitForSSH function...
	I1007 12:22:08.388851  407433 main.go:141] libmachine: (ha-628553-m02) Waiting for SSH to be available...
	I1007 12:22:08.391251  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.391741  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.391772  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.391911  407433 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH client type: external
	I1007 12:22:08.391956  407433 main.go:141] libmachine: (ha-628553-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa (-rw-------)
	I1007 12:22:08.391990  407433 main.go:141] libmachine: (ha-628553-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.169 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:22:08.392006  407433 main.go:141] libmachine: (ha-628553-m02) DBG | About to run SSH command:
	I1007 12:22:08.392017  407433 main.go:141] libmachine: (ha-628553-m02) DBG | exit 0
	I1007 12:22:08.519218  407433 main.go:141] libmachine: (ha-628553-m02) DBG | SSH cmd err, output: <nil>: 
	I1007 12:22:08.519627  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetConfigRaw
	I1007 12:22:08.520267  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:22:08.523654  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.524166  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.524196  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.524529  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:22:08.524763  407433 machine.go:93] provisionDockerMachine start ...
	I1007 12:22:08.524782  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:08.525039  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:08.527442  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.527883  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.527913  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.528056  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:08.528266  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.528420  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.528566  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:08.528726  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:22:08.528904  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:22:08.528914  407433 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:22:08.635508  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:22:08.635537  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:22:08.635764  407433 buildroot.go:166] provisioning hostname "ha-628553-m02"
	I1007 12:22:08.635794  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:22:08.635985  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:08.638435  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.638821  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.638858  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.639006  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:08.639220  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.639380  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.639600  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:08.639843  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:22:08.640069  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:22:08.640085  407433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m02 && echo "ha-628553-m02" | sudo tee /etc/hostname
	I1007 12:22:08.760610  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m02
	
	I1007 12:22:08.760648  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:08.763799  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.764196  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.764235  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.764430  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:08.764649  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.764831  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.764927  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:08.765087  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:22:08.765280  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:22:08.765295  407433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:22:08.881564  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:22:08.881622  407433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:22:08.881648  407433 buildroot.go:174] setting up certificates
	I1007 12:22:08.881664  407433 provision.go:84] configureAuth start
	I1007 12:22:08.881683  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetMachineName
	I1007 12:22:08.882018  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:22:08.884802  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.885191  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.885210  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.885458  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:08.887773  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.888162  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.888194  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.888325  407433 provision.go:143] copyHostCerts
	I1007 12:22:08.888363  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:22:08.888416  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:22:08.888425  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:22:08.888483  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:22:08.888569  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:22:08.888587  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:22:08.888597  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:22:08.888619  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:22:08.888671  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:22:08.888688  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:22:08.888694  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:22:08.888710  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:22:08.888771  407433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m02 san=[127.0.0.1 192.168.39.169 ha-628553-m02 localhost minikube]
	I1007 12:22:08.990424  407433 provision.go:177] copyRemoteCerts
	I1007 12:22:08.990490  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:22:08.990518  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:08.993619  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.994005  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:08.994040  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:08.994292  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:08.994527  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:08.994745  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:08.994894  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:22:09.077614  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:22:09.077727  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:22:09.103137  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:22:09.103230  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:22:09.129200  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:22:09.129295  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:22:09.154191  407433 provision.go:87] duration metric: took 272.509247ms to configureAuth
	I1007 12:22:09.154229  407433 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:22:09.154471  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:22:09.154586  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:09.157664  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.158116  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.158150  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.158338  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.158597  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.158797  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.158995  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.159186  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:22:09.159390  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:22:09.159411  407433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:22:09.381237  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:22:09.381276  407433 machine.go:96] duration metric: took 856.499638ms to provisionDockerMachine
	I1007 12:22:09.381296  407433 start.go:293] postStartSetup for "ha-628553-m02" (driver="kvm2")
	I1007 12:22:09.381312  407433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:22:09.381350  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.381697  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:22:09.381736  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:09.384327  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.384689  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.384719  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.384871  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.385068  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.385208  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.385332  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:22:09.469861  407433 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:22:09.474347  407433 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:22:09.474371  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:22:09.474447  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:22:09.474534  407433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:22:09.474548  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:22:09.474660  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:22:09.484037  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:22:09.510005  407433 start.go:296] duration metric: took 128.687734ms for postStartSetup
	I1007 12:22:09.510070  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.510493  407433 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 12:22:09.510523  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:09.513232  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.513602  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.513626  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.513760  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.513960  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.514147  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.514331  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:22:09.597780  407433 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1007 12:22:09.597864  407433 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1007 12:22:09.658410  407433 fix.go:56] duration metric: took 19.762812976s for fixHost
	I1007 12:22:09.658470  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:09.661500  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.661951  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.661983  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.662211  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.662450  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.662639  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.662813  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.662999  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:22:09.663221  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.169 22 <nil> <nil>}
	I1007 12:22:09.663232  407433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:22:09.776043  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728303729.747670422
	
	I1007 12:22:09.776070  407433 fix.go:216] guest clock: 1728303729.747670422
	I1007 12:22:09.776080  407433 fix.go:229] Guest: 2024-10-07 12:22:09.747670422 +0000 UTC Remote: 2024-10-07 12:22:09.658444939 +0000 UTC m=+343.575880939 (delta=89.225483ms)
	I1007 12:22:09.776103  407433 fix.go:200] guest clock delta is within tolerance: 89.225483ms
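The fix.go lines above compare the guest's `date +%s.%N` output with the host clock and accept the restarted VM only if the skew stays within tolerance. A small sketch of that comparison; the 2s tolerance used here is an assumption for illustration, and the timestamps are the ones from the log:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns the
    // absolute difference from the given host time.
    func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := host.Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, nil
    }

    func main() {
    	// host read 2024-10-07 12:22:09.658444939 UTC while the guest reported
    	// 1728303729.747670422, a delta of roughly 89ms
    	host := time.Date(2024, 10, 7, 12, 22, 9, 658444939, time.UTC)
    	delta, err := clockDelta("1728303729.747670422", host)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= 2*time.Second)
    }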
	I1007 12:22:09.776111  407433 start.go:83] releasing machines lock for "ha-628553-m02", held for 19.880537818s
	I1007 12:22:09.776138  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.776434  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:22:09.779169  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.779579  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.779606  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.782206  407433 out.go:177] * Found network options:
	I1007 12:22:09.783789  407433 out.go:177]   - NO_PROXY=192.168.39.110
	W1007 12:22:09.785051  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:22:09.785086  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.785678  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.785903  407433 main.go:141] libmachine: (ha-628553-m02) Calling .DriverName
	I1007 12:22:09.786013  407433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:22:09.786055  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	W1007 12:22:09.786140  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:22:09.786221  407433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:22:09.786242  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHHostname
	I1007 12:22:09.788838  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.788959  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.789279  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.789313  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:09.789336  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.789375  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:09.789481  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.789646  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.789745  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHPort
	I1007 12:22:09.789827  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.789895  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHKeyPath
	I1007 12:22:09.789957  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:22:09.790016  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetSSHUsername
	I1007 12:22:09.790109  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.169 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m02/id_rsa Username:docker}
	I1007 12:22:10.010031  407433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:22:10.017030  407433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:22:10.017123  407433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:22:10.033689  407433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:22:10.033718  407433 start.go:495] detecting cgroup driver to use...
	I1007 12:22:10.033781  407433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:22:10.054449  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:22:10.069446  407433 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:22:10.069527  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:22:10.083996  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:22:10.098610  407433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:22:10.219232  407433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:22:10.373843  407433 docker.go:233] disabling docker service ...
	I1007 12:22:10.373933  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:22:10.388851  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:22:10.403086  407433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:22:10.540209  407433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:22:10.675669  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:22:10.690384  407433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:22:10.709546  407433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:22:10.709623  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.720116  407433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:22:10.720190  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.730739  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.741524  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.752457  407433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:22:10.764013  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.775511  407433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.794371  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:22:10.805510  407433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:22:10.815537  407433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:22:10.815603  407433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:22:10.831057  407433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
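The `modprobe br_netfilter` and `echo 1 > /proc/sys/net/ipv4/ip_forward` steps above make bridged pod traffic visible to iptables and enable IPv4 forwarding, both prerequisites for the bridge CNI used with CRI-O. An equivalent check done by reading and writing the /proc/sys files directly; this is a sketch only (writing requires root, and the bridge path exists only after br_netfilter is loaded):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	for _, p := range []string{
    		"/proc/sys/net/bridge/bridge-nf-call-iptables", // appears once br_netfilter is loaded
    		"/proc/sys/net/ipv4/ip_forward",
    	} {
    		b, err := os.ReadFile(p)
    		if err != nil {
    			fmt.Printf("%s: %v\n", p, err)
    			continue
    		}
    		fmt.Printf("%s = %s", p, b)
    		if strings.TrimSpace(string(b)) != "1" {
    			// equivalent of `echo 1 > ...`
    			if err := os.WriteFile(p, []byte("1\n"), 0o644); err != nil {
    				fmt.Printf("enable %s: %v\n", p, err)
    			}
    		}
    	}
    }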
	I1007 12:22:10.841520  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:22:10.968877  407433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:22:11.075263  407433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:22:11.075357  407433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:22:11.081175  407433 start.go:563] Will wait 60s for crictl version
	I1007 12:22:11.081242  407433 ssh_runner.go:195] Run: which crictl
	I1007 12:22:11.085171  407433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:22:11.133160  407433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:22:11.133271  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:22:11.164197  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:22:11.194713  407433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:22:11.196334  407433 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:22:11.197764  407433 main.go:141] libmachine: (ha-628553-m02) Calling .GetIP
	I1007 12:22:11.200441  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:11.200851  407433 main.go:141] libmachine: (ha-628553-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:4a:2e", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:22:01 +0000 UTC Type:0 Mac:52:54:00:59:4a:2e Iaid: IPaddr:192.168.39.169 Prefix:24 Hostname:ha-628553-m02 Clientid:01:52:54:00:59:4a:2e}
	I1007 12:22:11.200877  407433 main.go:141] libmachine: (ha-628553-m02) DBG | domain ha-628553-m02 has defined IP address 192.168.39.169 and MAC address 52:54:00:59:4a:2e in network mk-ha-628553
	I1007 12:22:11.201089  407433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:22:11.205514  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
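The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends a fresh mapping to the host-side gateway 192.168.39.1. A hypothetical Go equivalent of the same rewrite (a sketch, assuming write access to /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites the hosts file so exactly one line maps ip to name.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this hostname.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
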
	I1007 12:22:11.218658  407433 mustload.go:65] Loading cluster: ha-628553
	I1007 12:22:11.218947  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:22:11.219414  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:22:11.219475  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:22:11.235738  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45321
	I1007 12:22:11.236267  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:22:11.236782  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:22:11.236806  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:22:11.237191  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:22:11.237368  407433 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:22:11.238911  407433 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:22:11.239272  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:22:11.239313  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:22:11.255263  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43133
	I1007 12:22:11.255795  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:22:11.256322  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:22:11.256339  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:22:11.256727  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:22:11.256985  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:22:11.257164  407433 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.169
	I1007 12:22:11.257178  407433 certs.go:194] generating shared ca certs ...
	I1007 12:22:11.257195  407433 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:22:11.257355  407433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:22:11.257399  407433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:22:11.257413  407433 certs.go:256] generating profile certs ...
	I1007 12:22:11.257495  407433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:22:11.257524  407433 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.043910f8
	I1007 12:22:11.257542  407433 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.043910f8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.110 192.168.39.169 192.168.39.149 192.168.39.254]
	I1007 12:22:11.376262  407433 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.043910f8 ...
	I1007 12:22:11.376304  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.043910f8: {Name:mkad116b0a0bd32720c3eed0fa14324438815f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:22:11.376541  407433 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.043910f8 ...
	I1007 12:22:11.376562  407433 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.043910f8: {Name:mk2a2a5bce258a22a7eaf81de7b6217966a2d787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:22:11.376684  407433 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt.043910f8 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt
	I1007 12:22:11.376854  407433 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.043910f8 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key
	I1007 12:22:11.377018  407433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:22:11.377044  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:22:11.377059  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:22:11.377076  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:22:11.377092  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:22:11.377110  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:22:11.377125  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:22:11.377140  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:22:11.377155  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
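The apiserver.crt.043910f8 generated above carries IP SANs for the service IP, loopback, all three control-plane nodes and the 192.168.39.254 VIP, so a client can validate the certificate no matter which endpoint it reaches. A rough crypto/x509 sketch of issuing a cert with those SANs (self-signed here for brevity; the real profile cert is signed by minikubeCA, and this is not minikube's certs.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs cover the service IP, localhost, every control-plane node and the kube-vip VIP.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.110"), net.ParseIP("192.168.39.169"),
			net.ParseIP("192.168.39.149"), net.ParseIP("192.168.39.254"), // HA VIP
		},
		NotBefore:   time.Now().Add(-time.Hour),
		NotAfter:    time.Now().AddDate(3, 0, 0),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for the sketch; a real setup would pass the CA cert and key as parents.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
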
	I1007 12:22:11.377229  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:22:11.377263  407433 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:22:11.377275  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:22:11.377300  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:22:11.377328  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:22:11.377352  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:22:11.377397  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:22:11.377426  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:22:11.377443  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:22:11.377461  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:22:11.377501  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:22:11.381059  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:22:11.381601  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:22:11.381633  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:22:11.381836  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:22:11.382043  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:22:11.382222  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:22:11.382390  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:22:11.455503  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:22:11.461136  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:22:11.473653  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:22:11.478135  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:22:11.489946  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:22:11.494843  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:22:11.506268  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:22:11.510909  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:22:11.522605  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:22:11.527274  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:22:11.538497  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:22:11.543624  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
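The stat/scp pairs above read sa.pub, sa.key and the front-proxy and etcd CA material off the existing control plane into memory before pushing copies to the new machine. A sketch of reading one such file over SSH with golang.org/x/crypto/ssh (illustrative, not the ssh_runner implementation; host, user and key path taken from the log):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.39.110:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Pull the service-account public key into memory, like the
	// "scp /var/lib/minikube/certs/sa.pub --> memory" step above.
	out, err := sess.Output("sudo cat /var/lib/minikube/certs/sa.pub")
	if err != nil {
		panic(err)
	}
	fmt.Printf("fetched %d bytes of sa.pub\n", len(out))
}
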
	I1007 12:22:11.555363  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:22:11.583578  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:22:11.611552  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:22:11.636924  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:22:11.661790  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:22:11.685935  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:22:11.711710  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:22:11.737160  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:22:11.762304  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:22:11.786539  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:22:11.810685  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:22:11.833875  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:22:11.851451  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:22:11.868954  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:22:11.886423  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:22:11.905393  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:22:11.923875  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:22:11.941676  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:22:11.959844  407433 ssh_runner.go:195] Run: openssl version
	I1007 12:22:11.966344  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:22:11.978502  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:22:11.983776  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:22:11.983853  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:22:11.990297  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:22:12.002305  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:22:12.014110  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:22:12.019086  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:22:12.019159  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:22:12.025335  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:22:12.036758  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:22:12.048355  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:22:12.053412  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:22:12.053492  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:22:12.059272  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
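Each CA bundle above gets linked into /etc/ssl/certs twice: once under its own name and once under the OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0) that TLS libraries use for lookup. A sketch reproducing the hash-link step by shelling out to openssl, much as the log itself does (needs root; illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink creates /etc/ssl/certs/<subject-hash>.0 pointing at the given cert,
// mirroring "openssl x509 -hash -noout -in <cert>" followed by "ln -fs".
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace an existing link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/384271.pem",
		"/usr/share/ca-certificates/3842712.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := hashLink(c); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}
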
	I1007 12:22:12.070758  407433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:22:12.075791  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:22:12.082720  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:22:12.089301  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:22:12.096107  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:22:12.102517  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:22:12.109098  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
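The -checkend 86400 runs above ask whether each control-plane certificate remains valid for at least another 24 hours; a failing check would trigger regeneration. The same test expressed in Go with crypto/x509 (a minimal sketch over two of the certificate paths from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// will expire within d, i.e. what "openssl x509 -checkend" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
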
	I1007 12:22:12.115468  407433 kubeadm.go:934] updating node {m02 192.168.39.169 8443 v1.31.1 crio true true} ...
	I1007 12:22:12.115576  407433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.169
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:22:12.115603  407433 kube-vip.go:115] generating kube-vip config ...
	I1007 12:22:12.115644  407433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:22:12.132912  407433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:22:12.132991  407433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
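The manifest above runs kube-vip as a static pod on each control plane: leader election on the plndr-cp-lock lease decides which node answers ARP for 192.168.39.254, and with lb_enable the same address also balances port 8443 across the apiservers. One quick way to confirm the VIP is terminating TLS (an illustrative probe, not part of the test itself):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// No CA is loaded here, so skip verification; this only checks that
	// something is answering TLS on the shared VIP address.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/version")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
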
	I1007 12:22:12.133043  407433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:22:12.145165  407433 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:22:12.145248  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:22:12.156404  407433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:22:12.175104  407433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:22:12.193469  407433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:22:12.212159  407433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:22:12.216422  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:22:12.231433  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:22:12.363053  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:22:12.381011  407433 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.169 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:22:12.381343  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:22:12.383401  407433 out.go:177] * Verifying Kubernetes components...
	I1007 12:22:12.384773  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:22:12.532916  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:22:12.553180  407433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:22:12.553550  407433 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:22:12.553656  407433 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:22:12.553972  407433 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m02" to be "Ready" ...
	I1007 12:22:12.554196  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:12.554220  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:12.554232  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:12.554239  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:18.301031  407433 round_trippers.go:574] Response Status:  in 5746 milliseconds
	I1007 12:22:19.302022  407433 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:19.302092  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:19.302103  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:19.302116  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:19.302126  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:25.882940  407433 round_trippers.go:574] Response Status: 200 OK in 6580 milliseconds
	I1007 12:22:25.884692  407433 node_ready.go:53] node "ha-628553-m02" has status "Ready":"Unknown"
	I1007 12:22:25.884831  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:25.884847  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:25.884860  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:25.884935  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:25.982595  407433 round_trippers.go:574] Response Status: 200 OK in 97 milliseconds
	I1007 12:22:26.054899  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:26.054921  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:26.054930  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:26.054933  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:26.059560  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:26.555014  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:26.555045  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:26.555057  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:26.555065  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:26.559206  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:27.054792  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:27.054816  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:27.054824  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:27.054827  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:27.061739  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:22:27.555292  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:27.555325  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:27.555337  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:27.555347  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:27.563215  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:28.055264  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:28.055291  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.055301  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.055304  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.059542  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:28.060267  407433 node_ready.go:49] node "ha-628553-m02" has status "Ready":"True"
	I1007 12:22:28.060290  407433 node_ready.go:38] duration metric: took 15.506272507s for node "ha-628553-m02" to be "Ready" ...
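The node wait above took 15.5s in total, most of it in the first two GETs (5.7s and 6.6s) while the apiserver was apparently still settling after the stale VIP host was overridden. The equivalent wait expressed against client-go rather than raw round_trippers (a sketch; kubeconfig path and node name taken from the log, retry interval chosen arbitrarily):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19763-377026/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Poll the node until its Ready condition reports True, like node_ready.go.
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-628553-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
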
	I1007 12:22:28.060301  407433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:22:28.060373  407433 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:22:28.060386  407433 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:22:28.060447  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:22:28.060454  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.060462  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.060469  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.077200  407433 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1007 12:22:28.090257  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.090388  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:22:28.090399  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.090410  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.090415  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.123364  407433 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I1007 12:22:28.124275  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:28.124299  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.124310  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.124318  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.132653  407433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:22:28.133675  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:28.133696  407433 pod_ready.go:82] duration metric: took 43.399563ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.133706  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.133777  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:22:28.133784  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.133792  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.133796  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.139346  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:28.140092  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:28.140116  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.140128  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.140134  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.149768  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:22:28.150490  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:28.150518  407433 pod_ready.go:82] duration metric: took 16.804436ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
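pod_ready.go repeats the same pattern per system-critical pod: GET the pod, then GET its node, and only call it Ready when the pod's Ready condition is True on a Ready node. A compact client-go sketch of the pod half, selecting the CoreDNS pods by the k8s-app=kube-dns label from the waiting list above (illustrative; namespace and label taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the check behind pod_ready.go: the PodReady condition must be True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19763-377026/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// List the CoreDNS pods the log waits on and report their readiness.
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}
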
	I1007 12:22:28.150534  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.150635  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:22:28.150648  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.150659  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.150665  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.167054  407433 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1007 12:22:28.168537  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:28.168564  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.168576  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.168597  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.196276  407433 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1007 12:22:28.196944  407433 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:28.196972  407433 pod_ready.go:82] duration metric: took 46.431838ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.196983  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:28.197072  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:28.197086  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.197095  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.197098  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.206052  407433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:22:28.206699  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:28.206720  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.206730  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.206735  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.214511  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:28.697370  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:28.697406  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.697424  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.697428  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.842951  407433 round_trippers.go:574] Response Status: 200 OK in 145 milliseconds
	I1007 12:22:28.845278  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:28.845303  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:28.845315  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:28.845322  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:28.854353  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:22:29.198165  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:29.198189  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:29.198198  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:29.198201  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:29.203175  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:29.205182  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:29.205209  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:29.205218  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:29.205223  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:29.210652  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:29.697669  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:29.697700  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:29.697713  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:29.697732  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:29.705615  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:29.706469  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:29.706492  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:29.706504  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:29.706511  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:29.715103  407433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:22:30.197960  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:30.197991  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:30.198004  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:30.198010  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:30.203140  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:30.204062  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:30.204080  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:30.204089  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:30.204096  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:30.222521  407433 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1007 12:22:30.223086  407433 pod_ready.go:103] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:30.697299  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:30.697334  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:30.697344  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:30.697347  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:30.701257  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:30.702061  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:30.702082  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:30.702094  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:30.702102  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:30.706938  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:31.198212  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:31.198243  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:31.198267  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:31.198274  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:31.203993  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:31.205091  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:31.205109  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:31.205118  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:31.205122  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:31.209691  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:31.697401  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:31.697433  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:31.697444  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:31.697451  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:31.701199  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:31.702210  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:31.702233  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:31.702246  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:31.702251  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:31.705418  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:32.198058  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:32.198116  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:32.198130  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:32.198136  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:32.203674  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:32.204616  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:32.204641  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:32.204653  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:32.204657  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:32.208476  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:32.697495  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:32.697522  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:32.697532  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:32.697538  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:32.702340  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:32.703196  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:32.703221  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:32.703235  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:32.703241  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:32.705999  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:32.706524  407433 pod_ready.go:103] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:33.198037  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:33.198069  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:33.198080  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:33.198084  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:33.203780  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:33.204636  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:33.204657  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:33.204669  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:33.204675  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:33.208214  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:33.697605  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:33.697632  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:33.697645  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:33.697650  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:33.701142  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:33.702088  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:33.702118  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:33.702131  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:33.702138  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:33.705122  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:34.197580  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:34.197604  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:34.197613  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:34.197619  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:34.201483  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:34.202164  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:34.202184  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:34.202195  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:34.202199  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:34.206539  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:34.697531  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:34.697559  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:34.697572  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:34.697580  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:34.702160  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:34.702943  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:34.702974  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:34.702987  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:34.702996  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:34.707938  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:34.708417  407433 pod_ready.go:103] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:35.197280  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:35.197306  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:35.197317  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:35.197322  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:35.200415  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:35.201519  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:35.201541  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:35.201553  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:35.201558  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:35.206937  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:35.697771  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:35.697814  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:35.697822  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:35.697826  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:35.700986  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:35.701912  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:35.701931  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:35.701942  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:35.701948  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:35.705346  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:36.197982  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:36.198006  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.198016  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.198020  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.201971  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:36.202642  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:36.202662  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.202672  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.202678  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.209784  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:36.697343  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:22:36.697370  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.697382  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.697389  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.707635  407433 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:22:36.708226  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:36.708244  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.708252  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.708256  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.717789  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:22:36.718189  407433 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:36.718208  407433 pod_ready.go:82] duration metric: took 8.521218499s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.718219  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.718296  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:22:36.718304  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.718312  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.718317  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.729522  407433 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:22:36.730246  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:36.730268  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.730279  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.730285  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.739861  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:22:36.740433  407433 pod_ready.go:93] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:36.740455  407433 pod_ready.go:82] duration metric: took 22.228703ms for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.740488  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.740589  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:22:36.740601  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.740612  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.740618  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.745235  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:36.746010  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:36.746027  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.746038  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.746044  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.755362  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:22:36.755982  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:36.756004  407433 pod_ready.go:82] duration metric: took 15.502576ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.756018  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:36.756088  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:36.756098  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.756109  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.756119  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.762638  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:22:36.763304  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:36.763320  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:36.763332  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:36.763338  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:36.769260  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:37.257047  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:37.257073  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:37.257082  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:37.257088  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:37.261258  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:37.262344  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:37.262362  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:37.262374  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:37.262381  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:37.265528  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:37.756316  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:37.756348  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:37.756357  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:37.756380  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:37.759959  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:37.760592  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:37.760608  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:37.760619  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:37.760626  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:37.763881  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:38.256736  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:38.256764  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:38.256772  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:38.256776  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:38.260822  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:38.261748  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:38.261773  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:38.261784  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:38.261790  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:38.264792  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:38.756573  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:38.756600  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:38.756608  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:38.756613  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:38.760580  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:38.761226  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:38.761243  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:38.761253  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:38.761258  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:38.764495  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:38.765069  407433 pod_ready.go:103] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:39.257548  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:39.257582  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:39.257596  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:39.257604  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:39.262371  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:39.263141  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:39.263157  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:39.263165  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:39.263168  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:39.265558  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:39.756414  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:39.756444  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:39.756453  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:39.756456  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:39.760294  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:39.761197  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:39.761224  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:39.761237  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:39.761244  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:39.764637  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:40.256227  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:40.256262  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:40.256270  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:40.256275  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:40.259556  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:40.260322  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:40.260342  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:40.260351  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:40.260355  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:40.263371  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:40.756372  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:40.756396  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:40.756403  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:40.756408  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:40.771335  407433 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1007 12:22:40.772006  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:40.772022  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:40.772030  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:40.772033  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:40.777535  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:40.777965  407433 pod_ready.go:103] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:41.256735  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:41.256767  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:41.256780  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:41.256788  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:41.273883  407433 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1007 12:22:41.280920  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:41.280948  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:41.280960  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:41.280967  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:41.292828  407433 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1007 12:22:41.757198  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:41.757222  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:41.757231  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:41.757236  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:41.770337  407433 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1007 12:22:41.771472  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:41.771494  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:41.771506  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:41.771514  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:41.779489  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:42.257216  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:42.257243  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.257252  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.257262  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.261915  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:42.262739  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:42.262763  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.262774  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.262781  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.266222  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:42.757106  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:22:42.757127  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.757137  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.757142  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.762377  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:42.763101  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:42.763119  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.763131  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.763136  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.770630  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:22:42.771096  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:42.771122  407433 pod_ready.go:82] duration metric: took 6.015095455s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.771133  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.771216  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:22:42.771226  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.771237  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.771244  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.778181  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:22:42.779565  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:42.779581  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.779591  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.779603  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.782316  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.782811  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:42.782829  407433 pod_ready.go:82] duration metric: took 11.687925ms for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.782843  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.782911  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:22:42.782920  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.782930  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.782937  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.785570  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.786923  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:42.786941  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.786952  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.786975  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.789441  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.789946  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:42.789965  407433 pod_ready.go:82] duration metric: took 7.11467ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.789979  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.790058  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:22:42.790069  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.790079  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.790088  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.792899  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.793562  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:42.793576  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.793584  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.793588  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.796927  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:42.797476  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:42.797498  407433 pod_ready.go:82] duration metric: took 7.503676ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.797511  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.797567  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:22:42.797574  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.797581  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.797587  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.800148  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.800742  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:42.800754  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.800762  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.800765  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.803727  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:42.804448  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:42.804467  407433 pod_ready.go:82] duration metric: took 6.948065ms for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.804481  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:42.957963  407433 request.go:632] Waited for 153.368794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:22:42.958055  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:22:42.958060  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:42.958069  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:42.958078  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:42.961567  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.157800  407433 request.go:632] Waited for 195.378331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:43.157878  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:43.157892  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:43.157903  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:43.157914  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:43.161601  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.162219  407433 pod_ready.go:93] pod "kube-proxy-956k4" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:43.162242  407433 pod_ready.go:82] duration metric: took 357.752321ms for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
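
The "Waited for ... due to client-side throttling" entries above come from client-go's built-in client-side rate limiter, which spaces out requests once the client exceeds its configured QPS/Burst budget; it is separate from the server-side API Priority and Fairness feature the message mentions. A minimal sketch of how that limiter is configured on a client, assuming illustrative QPS/Burst values rather than whatever minikube actually sets:

    // Sketch: configuring client-go's client-side rate limiter. When this
    // limiter delays a request, client-go logs the "Waited for ... due to
    // client-side throttling" message seen in the log above.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 5    // steady-state requests per second (illustrative value)
        cfg.Burst = 10 // short bursts allowed above QPS (illustrative value)
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        cs, err := newThrottledClient(clientcmd.RecommendedHomeFile)
        fmt.Println(cs != nil, err)
    }
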
	I1007 12:22:43.162255  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:43.357413  407433 request.go:632] Waited for 195.066421ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:22:43.357483  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:22:43.357488  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:43.357495  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:43.357499  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:43.361323  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.557269  407433 request.go:632] Waited for 195.302674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:22:43.557343  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:22:43.557353  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:43.557363  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:43.557371  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:43.561306  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.561756  407433 pod_ready.go:93] pod "kube-proxy-fkzqr" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:43.561775  407433 pod_ready.go:82] duration metric: took 399.513689ms for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:43.561786  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:43.757890  407433 request.go:632] Waited for 195.995772ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:22:43.757962  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:22:43.757967  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:43.757976  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:43.757982  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:43.761766  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.957983  407433 request.go:632] Waited for 195.42551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:43.958056  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:43.958062  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:43.958072  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:43.958078  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:43.961672  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:43.962521  407433 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:43.962544  407433 pod_ready.go:82] duration metric: took 400.749029ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:43.962557  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:44.157168  407433 request.go:632] Waited for 194.496229ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:22:44.157243  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:22:44.157249  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:44.157257  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:44.157261  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:44.161439  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:44.357491  407433 request.go:632] Waited for 195.406494ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:44.357559  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:44.357564  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:44.357572  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:44.357576  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:44.361412  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:44.361938  407433 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:44.361961  407433 pod_ready.go:82] duration metric: took 399.39545ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:44.361973  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:44.557153  407433 request.go:632] Waited for 195.068437ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:22:44.557219  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:22:44.557225  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:44.557232  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:44.557238  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:44.561658  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:44.757919  407433 request.go:632] Waited for 195.424954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:44.757989  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:22:44.757995  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:44.758002  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:44.758006  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:44.762165  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:44.763001  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:44.763028  407433 pod_ready.go:82] duration metric: took 401.047381ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:44.763043  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:44.957578  407433 request.go:632] Waited for 194.437629ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:44.957639  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:44.957645  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:44.957653  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:44.957658  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:44.961740  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:45.157777  407433 request.go:632] Waited for 195.385409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.157877  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.157886  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:45.157896  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:45.157904  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:45.161679  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:45.357862  407433 request.go:632] Waited for 94.300622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:45.357935  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:45.357942  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:45.357954  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:45.357962  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:45.361543  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:45.557595  407433 request.go:632] Waited for 195.417854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.557668  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.557676  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:45.557688  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:45.557696  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:45.561494  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:45.763256  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:45.763291  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:45.763304  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:45.763309  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:45.767863  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:45.958122  407433 request.go:632] Waited for 189.42553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.958187  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:45.958192  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:45.958200  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:45.958204  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:45.961616  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:46.263956  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:46.263983  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:46.263995  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:46.264001  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:46.267522  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:46.357364  407433 request.go:632] Waited for 88.387695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:46.357421  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:46.357426  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:46.357434  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:46.357438  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:46.361662  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:46.764296  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:46.764325  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:46.764335  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:46.764341  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:46.770081  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:22:46.770843  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:46.770869  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:46.770883  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:46.770892  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:46.774337  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:46.775088  407433 pod_ready.go:103] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"False"
	I1007 12:22:47.263519  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:47.263548  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.263557  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.263562  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.267707  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:47.268340  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:47.268356  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.268365  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.268370  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.271526  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:47.764247  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:22:47.764274  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.764285  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.764289  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.767691  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:47.768263  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:22:47.768279  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.768287  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.768292  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.772194  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:47.772988  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:47.773010  407433 pod_ready.go:82] duration metric: took 3.009958286s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:47.773024  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:47.773091  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:22:47.773098  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.773107  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.773113  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.775884  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:22:47.957918  407433 request.go:632] Waited for 181.421489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:47.958003  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:22:47.958013  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.958025  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.958035  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.961770  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:47.962588  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:22:47.962607  407433 pod_ready.go:82] duration metric: took 189.574431ms for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:22:47.962618  407433 pod_ready.go:39] duration metric: took 19.902306936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
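
The pod_ready entries above repeatedly GET each control-plane pod and its node (on roughly a 500ms cadence) until the pod reports the Ready condition or the 6m0s per-pod timeout expires. A minimal sketch of that wait loop using client-go; this is an illustration, not minikube's actual pod_ready.go, and the namespace and pod name below are taken from the log purely as an example:

    // Sketch: poll a pod until its Ready condition is True, on a ~500ms
    // cadence with an overall timeout, mirroring the loop visible in the log.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-apiserver-ha-628553-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet"
                }
                return isPodReady(pod), nil
            })
        fmt.Println("pod ready:", err == nil)
    }
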
	I1007 12:22:47.962636  407433 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:22:47.962704  407433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:22:47.981086  407433 api_server.go:72] duration metric: took 35.600013912s to wait for apiserver process to appear ...
	I1007 12:22:47.981128  407433 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:22:47.981157  407433 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:22:47.987797  407433 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1007 12:22:47.987879  407433 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:22:47.987885  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:47.987897  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:47.987904  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:47.988886  407433 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1007 12:22:47.989005  407433 api_server.go:141] control plane version: v1.31.1
	I1007 12:22:47.989025  407433 api_server.go:131] duration metric: took 7.889956ms to wait for apiserver health ...
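
The healthz probe above is a plain HTTPS GET against /healthz that treats a 200 response with body "ok" as healthy, followed by a GET /version to read the control-plane version. A minimal sketch of that check; it skips TLS verification only to stay short (the real client trusts the cluster CA) and assumes anonymous access to /healthz is permitted, as it is on a default apiserver:

    // Sketch: probe the apiserver's /healthz endpoint and report whether it
    // answered 200 with body "ok". TLS verification is skipped for brevity.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func apiserverHealthy(base string) (bool, error) {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        healthy, err := apiserverHealthy("https://192.168.39.110:8443")
        fmt.Println("healthy:", healthy, "err:", err)
    }
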
	I1007 12:22:47.989046  407433 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:22:48.157497  407433 request.go:632] Waited for 168.357325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:22:48.157582  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:22:48.157590  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:48.157598  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:48.157606  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:48.170794  407433 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1007 12:22:48.178444  407433 system_pods.go:59] 26 kube-system pods found
	I1007 12:22:48.178489  407433 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:22:48.178500  407433 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:22:48.178507  407433 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:22:48.178511  407433 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:22:48.178515  407433 system_pods.go:61] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:22:48.178519  407433 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:22:48.178522  407433 system_pods.go:61] "kindnet-rwk2r" [8ec7b1f3-d6b5-4e44-8574-c197eb45bf28] Running
	I1007 12:22:48.178525  407433 system_pods.go:61] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:22:48.178529  407433 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:22:48.178533  407433 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:22:48.178538  407433 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:22:48.178543  407433 system_pods.go:61] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:22:48.178547  407433 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:22:48.178555  407433 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:22:48.178564  407433 system_pods.go:61] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:22:48.178570  407433 system_pods.go:61] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:22:48.178576  407433 system_pods.go:61] "kube-proxy-fkzqr" [16f7acfc-13b5-426d-9b0a-59a5131fc297] Running
	I1007 12:22:48.178581  407433 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:22:48.178586  407433 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:22:48.178602  407433 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:22:48.178610  407433 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:22:48.178613  407433 system_pods.go:61] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:22:48.178616  407433 system_pods.go:61] "kube-vip-ha-628553" [56148ec7-dffa-4dfc-8414-c9feb65b09d3] Running
	I1007 12:22:48.178619  407433 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:22:48.178622  407433 system_pods.go:61] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:22:48.178625  407433 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:22:48.178631  407433 system_pods.go:74] duration metric: took 189.575174ms to wait for pod list to return data ...
	I1007 12:22:48.178641  407433 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:22:48.358157  407433 request.go:632] Waited for 179.407704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:22:48.358239  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:22:48.358248  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:48.358260  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:48.358269  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:48.362697  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:22:48.363032  407433 default_sa.go:45] found service account: "default"
	I1007 12:22:48.363053  407433 default_sa.go:55] duration metric: took 184.404861ms for default service account to be created ...
	I1007 12:22:48.363066  407433 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:22:48.558136  407433 request.go:632] Waited for 194.970967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:22:48.558208  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:22:48.558217  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:48.558228  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:48.558234  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:48.569105  407433 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1007 12:22:48.578079  407433 system_pods.go:86] 26 kube-system pods found
	I1007 12:22:48.578116  407433 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:22:48.578125  407433 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1007 12:22:48.578132  407433 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:22:48.578136  407433 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:22:48.578140  407433 system_pods.go:89] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:22:48.578143  407433 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:22:48.578146  407433 system_pods.go:89] "kindnet-rwk2r" [8ec7b1f3-d6b5-4e44-8574-c197eb45bf28] Running
	I1007 12:22:48.578152  407433 system_pods.go:89] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:22:48.578156  407433 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:22:48.578162  407433 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:22:48.578167  407433 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:22:48.578172  407433 system_pods.go:89] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:22:48.578180  407433 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:22:48.578187  407433 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:22:48.578196  407433 system_pods.go:89] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:22:48.578202  407433 system_pods.go:89] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:22:48.578212  407433 system_pods.go:89] "kube-proxy-fkzqr" [16f7acfc-13b5-426d-9b0a-59a5131fc297] Running
	I1007 12:22:48.578218  407433 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:22:48.578223  407433 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:22:48.578230  407433 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:22:48.578236  407433 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:22:48.578244  407433 system_pods.go:89] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:22:48.578249  407433 system_pods.go:89] "kube-vip-ha-628553" [56148ec7-dffa-4dfc-8414-c9feb65b09d3] Running
	I1007 12:22:48.578257  407433 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:22:48.578262  407433 system_pods.go:89] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:22:48.578270  407433 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:22:48.578278  407433 system_pods.go:126] duration metric: took 215.203312ms to wait for k8s-apps to be running ...
	I1007 12:22:48.578288  407433 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:22:48.578337  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:22:48.595876  407433 system_svc.go:56] duration metric: took 17.573712ms WaitForService to wait for kubelet
	I1007 12:22:48.595992  407433 kubeadm.go:582] duration metric: took 36.214918629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:22:48.596027  407433 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:22:48.757411  407433 request.go:632] Waited for 161.243279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:22:48.757479  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:22:48.757486  407433 round_trippers.go:469] Request Headers:
	I1007 12:22:48.757498  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:22:48.757506  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:22:48.761378  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:22:48.762688  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:22:48.762714  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:22:48.762729  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:22:48.762735  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:22:48.762740  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:22:48.762744  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:22:48.762749  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:22:48.762754  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:22:48.762760  407433 node_conditions.go:105] duration metric: took 166.71889ms to run NodePressure ...
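
The node_conditions entries above walk every node and read its CPU and ephemeral-storage capacity from the node status (the same status object that carries the pressure conditions being verified). A minimal client-go sketch that prints those two capacities per node; illustrative only, not minikube's node_conditions.go:

    // Sketch: list all nodes and print the CPU and ephemeral-storage capacity
    // reported in their status, matching the values shown in the log above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }
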
	I1007 12:22:48.762780  407433 start.go:241] waiting for startup goroutines ...
	I1007 12:22:48.762829  407433 start.go:255] writing updated cluster config ...
	I1007 12:22:48.764946  407433 out.go:201] 
	I1007 12:22:48.766449  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:22:48.766593  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:22:48.768265  407433 out.go:177] * Starting "ha-628553-m03" control-plane node in "ha-628553" cluster
	I1007 12:22:48.769400  407433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:22:48.769441  407433 cache.go:56] Caching tarball of preloaded images
	I1007 12:22:48.769551  407433 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:22:48.769561  407433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:22:48.769658  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:22:48.769852  407433 start.go:360] acquireMachinesLock for ha-628553-m03: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:22:48.769900  407433 start.go:364] duration metric: took 26.129µs to acquireMachinesLock for "ha-628553-m03"
	I1007 12:22:48.769918  407433 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:22:48.769923  407433 fix.go:54] fixHost starting: m03
	I1007 12:22:48.770262  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:22:48.770311  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:22:48.788039  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45227
	I1007 12:22:48.788576  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:22:48.789119  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:22:48.789141  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:22:48.789458  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:22:48.789653  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:22:48.789784  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetState
	I1007 12:22:48.791257  407433 fix.go:112] recreateIfNeeded on ha-628553-m03: state=Stopped err=<nil>
	I1007 12:22:48.791281  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	W1007 12:22:48.791436  407433 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:22:48.793229  407433 out.go:177] * Restarting existing kvm2 VM for "ha-628553-m03" ...
	I1007 12:22:48.794588  407433 main.go:141] libmachine: (ha-628553-m03) Calling .Start
	I1007 12:22:48.794790  407433 main.go:141] libmachine: (ha-628553-m03) Ensuring networks are active...
	I1007 12:22:48.795632  407433 main.go:141] libmachine: (ha-628553-m03) Ensuring network default is active
	I1007 12:22:48.796059  407433 main.go:141] libmachine: (ha-628553-m03) Ensuring network mk-ha-628553 is active
	I1007 12:22:48.796459  407433 main.go:141] libmachine: (ha-628553-m03) Getting domain xml...
	I1007 12:22:48.797244  407433 main.go:141] libmachine: (ha-628553-m03) Creating domain...
	I1007 12:22:50.059311  407433 main.go:141] libmachine: (ha-628553-m03) Waiting to get IP...
	I1007 12:22:50.060372  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:50.060879  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:50.060964  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:50.060852  409345 retry.go:31] will retry after 192.791787ms: waiting for machine to come up
	I1007 12:22:50.255484  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:50.256001  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:50.256027  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:50.255953  409345 retry.go:31] will retry after 253.611969ms: waiting for machine to come up
	I1007 12:22:50.511637  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:50.512045  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:50.512063  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:50.512005  409345 retry.go:31] will retry after 324.599473ms: waiting for machine to come up
	I1007 12:22:50.838737  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:50.839303  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:50.839327  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:50.839255  409345 retry.go:31] will retry after 528.387577ms: waiting for machine to come up
	I1007 12:22:51.368905  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:51.369291  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:51.369315  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:51.369243  409345 retry.go:31] will retry after 720.335589ms: waiting for machine to come up
	I1007 12:22:52.091215  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:52.091630  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:52.091650  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:52.091588  409345 retry.go:31] will retry after 812.339657ms: waiting for machine to come up
	I1007 12:22:52.905101  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:52.905638  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:52.905670  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:52.905581  409345 retry.go:31] will retry after 1.091749856s: waiting for machine to come up
	I1007 12:22:53.999247  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:53.999746  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:53.999771  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:53.999680  409345 retry.go:31] will retry after 1.129170214s: waiting for machine to come up
	I1007 12:22:55.130925  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:55.131502  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:55.131537  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:55.131443  409345 retry.go:31] will retry after 1.171260829s: waiting for machine to come up
	I1007 12:22:56.304318  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:56.304945  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:56.304976  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:56.304894  409345 retry.go:31] will retry after 2.157722162s: waiting for machine to come up
	I1007 12:22:58.464571  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:22:58.464987  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:22:58.465010  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:22:58.464945  409345 retry.go:31] will retry after 2.28932583s: waiting for machine to come up
	I1007 12:23:00.756368  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:00.756994  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:23:00.757021  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:23:00.756934  409345 retry.go:31] will retry after 2.519358741s: waiting for machine to come up
	I1007 12:23:03.277504  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:03.277859  407433 main.go:141] libmachine: (ha-628553-m03) DBG | unable to find current IP address of domain ha-628553-m03 in network mk-ha-628553
	I1007 12:23:03.277897  407433 main.go:141] libmachine: (ha-628553-m03) DBG | I1007 12:23:03.277846  409345 retry.go:31] will retry after 3.670860774s: waiting for machine to come up
	I1007 12:23:06.951953  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:06.952402  407433 main.go:141] libmachine: (ha-628553-m03) Found IP for machine: 192.168.39.149
	I1007 12:23:06.952443  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has current primary IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:06.952454  407433 main.go:141] libmachine: (ha-628553-m03) Reserving static IP address...
	I1007 12:23:06.952862  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "ha-628553-m03", mac: "52:54:00:3c:9f:34", ip: "192.168.39.149"} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:06.952897  407433 main.go:141] libmachine: (ha-628553-m03) DBG | skip adding static IP to network mk-ha-628553 - found existing host DHCP lease matching {name: "ha-628553-m03", mac: "52:54:00:3c:9f:34", ip: "192.168.39.149"}
	I1007 12:23:06.952906  407433 main.go:141] libmachine: (ha-628553-m03) Reserved static IP address: 192.168.39.149
	I1007 12:23:06.952914  407433 main.go:141] libmachine: (ha-628553-m03) Waiting for SSH to be available...
	I1007 12:23:06.952927  407433 main.go:141] libmachine: (ha-628553-m03) DBG | Getting to WaitForSSH function...
	I1007 12:23:06.955043  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:06.955351  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:06.955381  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:06.955448  407433 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH client type: external
	I1007 12:23:06.955503  407433 main.go:141] libmachine: (ha-628553-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa (-rw-------)
	I1007 12:23:06.955539  407433 main.go:141] libmachine: (ha-628553-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:23:06.955561  407433 main.go:141] libmachine: (ha-628553-m03) DBG | About to run SSH command:
	I1007 12:23:06.955572  407433 main.go:141] libmachine: (ha-628553-m03) DBG | exit 0
	I1007 12:23:07.079169  407433 main.go:141] libmachine: (ha-628553-m03) DBG | SSH cmd err, output: <nil>: 
	I1007 12:23:07.079565  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetConfigRaw
	I1007 12:23:07.080385  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:23:07.083418  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.083852  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.083879  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.084189  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:23:07.084545  407433 machine.go:93] provisionDockerMachine start ...
	I1007 12:23:07.084571  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:07.084826  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.087551  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.087978  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.088009  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.088182  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:07.088391  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.088547  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.088740  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:07.088923  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:23:07.089188  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:23:07.089206  407433 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:23:07.196059  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:23:07.196088  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:23:07.196335  407433 buildroot.go:166] provisioning hostname "ha-628553-m03"
	I1007 12:23:07.196347  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:23:07.196551  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.199203  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.199616  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.199644  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.199833  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:07.200016  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.200171  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.200290  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:07.200443  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:23:07.200715  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:23:07.200731  407433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m03 && echo "ha-628553-m03" | sudo tee /etc/hostname
	I1007 12:23:07.323544  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m03
	
	I1007 12:23:07.323582  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.326726  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.327122  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.327150  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.327368  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:07.327582  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.327771  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.327933  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:07.328149  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:23:07.328353  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:23:07.328376  407433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:23:07.450543  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:23:07.450579  407433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:23:07.450611  407433 buildroot.go:174] setting up certificates
	I1007 12:23:07.450626  407433 provision.go:84] configureAuth start
	I1007 12:23:07.450642  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetMachineName
	I1007 12:23:07.451018  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:23:07.454048  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.454630  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.454686  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.454833  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.457738  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.458176  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.458206  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.458383  407433 provision.go:143] copyHostCerts
	I1007 12:23:07.458422  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:23:07.458463  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:23:07.458473  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:23:07.458535  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:23:07.458607  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:23:07.458625  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:23:07.458631  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:23:07.458658  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:23:07.458702  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:23:07.458718  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:23:07.458724  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:23:07.458745  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:23:07.458791  407433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m03 san=[127.0.0.1 192.168.39.149 ha-628553-m03 localhost minikube]
	I1007 12:23:07.670948  407433 provision.go:177] copyRemoteCerts
	I1007 12:23:07.671039  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:23:07.671068  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.673765  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.674173  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.674201  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.674449  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:07.674674  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.674803  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:07.674918  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:23:07.758450  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:23:07.758534  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:23:07.784428  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:23:07.784519  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:23:07.810095  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:23:07.810186  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:23:07.836425  407433 provision.go:87] duration metric: took 385.779504ms to configureAuth
	I1007 12:23:07.836456  407433 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:23:07.836690  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:23:07.836767  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:07.839503  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.839941  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:07.839967  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:07.840189  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:07.840398  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.840560  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:07.840709  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:07.840928  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:23:07.841153  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:23:07.841174  407433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:23:08.085320  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:23:08.085371  407433 machine.go:96] duration metric: took 1.000808183s to provisionDockerMachine
	I1007 12:23:08.085390  407433 start.go:293] postStartSetup for "ha-628553-m03" (driver="kvm2")
	I1007 12:23:08.085410  407433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:23:08.085436  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.085777  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:23:08.085815  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:08.088687  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.089100  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.089153  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.089292  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:08.089520  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.089746  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:08.089915  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:23:08.175403  407433 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:23:08.180139  407433 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:23:08.180174  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:23:08.180280  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:23:08.180380  407433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:23:08.180392  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:23:08.180502  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:23:08.193008  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:23:08.221327  407433 start.go:296] duration metric: took 135.910859ms for postStartSetup
	I1007 12:23:08.221405  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.221767  407433 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 12:23:08.221797  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:08.224699  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.225137  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.225168  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.225344  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:08.225570  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.225756  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:08.225877  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:23:08.311077  407433 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1007 12:23:08.311172  407433 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1007 12:23:08.370424  407433 fix.go:56] duration metric: took 19.600489752s for fixHost
	I1007 12:23:08.370480  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:08.373852  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.374234  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.374267  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.374431  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:08.374676  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.374884  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.375076  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:08.375312  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:23:08.375552  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.149 22 <nil> <nil>}
	I1007 12:23:08.375573  407433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:23:08.484001  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728303788.441688765
	
	I1007 12:23:08.484028  407433 fix.go:216] guest clock: 1728303788.441688765
	I1007 12:23:08.484036  407433 fix.go:229] Guest: 2024-10-07 12:23:08.441688765 +0000 UTC Remote: 2024-10-07 12:23:08.370456366 +0000 UTC m=+402.287892272 (delta=71.232399ms)
	I1007 12:23:08.484062  407433 fix.go:200] guest clock delta is within tolerance: 71.232399ms
	I1007 12:23:08.484071  407433 start.go:83] releasing machines lock for "ha-628553-m03", held for 19.714158797s
	I1007 12:23:08.484104  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.484386  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:23:08.487120  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.487548  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.487576  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.489523  407433 out.go:177] * Found network options:
	I1007 12:23:08.490983  407433 out.go:177]   - NO_PROXY=192.168.39.110,192.168.39.169
	W1007 12:23:08.492418  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:23:08.492450  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:23:08.492471  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.493243  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.493459  407433 main.go:141] libmachine: (ha-628553-m03) Calling .DriverName
	I1007 12:23:08.493570  407433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:23:08.493623  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	W1007 12:23:08.493646  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:23:08.493673  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:23:08.493743  407433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:23:08.493761  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHHostname
	I1007 12:23:08.496386  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.496480  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.496868  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.496897  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.496924  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:08.496943  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:08.497079  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:08.497346  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.497377  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHPort
	I1007 12:23:08.497541  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:08.497696  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHKeyPath
	I1007 12:23:08.497840  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:23:08.497866  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetSSHUsername
	I1007 12:23:08.498023  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m03/id_rsa Username:docker}
	I1007 12:23:08.730433  407433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:23:08.737080  407433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:23:08.737155  407433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:23:08.755299  407433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:23:08.755325  407433 start.go:495] detecting cgroup driver to use...
	I1007 12:23:08.755389  407433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:23:08.780038  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:23:08.795377  407433 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:23:08.795440  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:23:08.811910  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:23:08.828314  407433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:23:08.951245  407433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:23:09.115120  407433 docker.go:233] disabling docker service ...
	I1007 12:23:09.115225  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:23:09.133356  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:23:09.148971  407433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:23:09.293835  407433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:23:09.423867  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:23:09.439087  407433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:23:09.458897  407433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:23:09.459001  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.469902  407433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:23:09.469994  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.481722  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.492505  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.505280  407433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:23:09.518945  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.530830  407433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.554731  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:23:09.569925  407433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:23:09.580795  407433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:23:09.580888  407433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:23:09.597673  407433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:23:09.612157  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:23:09.766539  407433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:23:09.880706  407433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:23:09.880792  407433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:23:09.885746  407433 start.go:563] Will wait 60s for crictl version
	I1007 12:23:09.885814  407433 ssh_runner.go:195] Run: which crictl
	I1007 12:23:09.889812  407433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:23:09.937961  407433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:23:09.938036  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:23:09.967760  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:23:09.998712  407433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:23:10.000182  407433 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:23:10.001820  407433 out.go:177]   - env NO_PROXY=192.168.39.110,192.168.39.169
	I1007 12:23:10.003205  407433 main.go:141] libmachine: (ha-628553-m03) Calling .GetIP
	I1007 12:23:10.006117  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:10.006523  407433 main.go:141] libmachine: (ha-628553-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:9f:34", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:23:00 +0000 UTC Type:0 Mac:52:54:00:3c:9f:34 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:ha-628553-m03 Clientid:01:52:54:00:3c:9f:34}
	I1007 12:23:10.006555  407433 main.go:141] libmachine: (ha-628553-m03) DBG | domain ha-628553-m03 has defined IP address 192.168.39.149 and MAC address 52:54:00:3c:9f:34 in network mk-ha-628553
	I1007 12:23:10.006741  407433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:23:10.011690  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:23:10.025541  407433 mustload.go:65] Loading cluster: ha-628553
	I1007 12:23:10.025766  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:23:10.026028  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:23:10.026071  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:23:10.041914  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I1007 12:23:10.042428  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:23:10.042951  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:23:10.042983  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:23:10.043362  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:23:10.043554  407433 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:23:10.045158  407433 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:23:10.045562  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:23:10.045608  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:23:10.083352  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I1007 12:23:10.083776  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:23:10.084261  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:23:10.084287  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:23:10.084725  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:23:10.084948  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:23:10.085117  407433 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.149
	I1007 12:23:10.085130  407433 certs.go:194] generating shared ca certs ...
	I1007 12:23:10.085148  407433 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:23:10.085306  407433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:23:10.085370  407433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:23:10.085384  407433 certs.go:256] generating profile certs ...
	I1007 12:23:10.085494  407433 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key
	I1007 12:23:10.085567  407433 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key.e74801e5
	I1007 12:23:10.085617  407433 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key
	I1007 12:23:10.085634  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:23:10.085655  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:23:10.085672  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:23:10.085688  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:23:10.085710  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:23:10.085739  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:23:10.085758  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:23:10.085776  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:23:10.085842  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:23:10.085885  407433 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:23:10.085899  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:23:10.085932  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:23:10.085965  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:23:10.085997  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:23:10.086048  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:23:10.086084  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:23:10.086104  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:23:10.086121  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:23:10.086157  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHHostname
	I1007 12:23:10.089488  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:23:10.089878  407433 main.go:141] libmachine: (ha-628553) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:fd", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:21:35 +0000 UTC Type:0 Mac:52:54:00:7b:12:fd Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-628553 Clientid:01:52:54:00:7b:12:fd}
	I1007 12:23:10.089907  407433 main.go:141] libmachine: (ha-628553) DBG | domain ha-628553 has defined IP address 192.168.39.110 and MAC address 52:54:00:7b:12:fd in network mk-ha-628553
	I1007 12:23:10.090103  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHPort
	I1007 12:23:10.090299  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHKeyPath
	I1007 12:23:10.090474  407433 main.go:141] libmachine: (ha-628553) Calling .GetSSHUsername
	I1007 12:23:10.090656  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553/id_rsa Username:docker}
	I1007 12:23:10.163437  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1007 12:23:10.168780  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1007 12:23:10.180806  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1007 12:23:10.185150  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1007 12:23:10.198300  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1007 12:23:10.203414  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1007 12:23:10.216836  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1007 12:23:10.222330  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1007 12:23:10.234652  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1007 12:23:10.239420  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1007 12:23:10.252193  407433 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1007 12:23:10.256802  407433 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1007 12:23:10.268584  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:23:10.295050  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:23:10.320755  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:23:10.347772  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:23:10.373490  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1007 12:23:10.399842  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 12:23:10.425371  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:23:10.452365  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:23:10.479533  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:23:10.504233  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:23:10.528470  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:23:10.553208  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1007 12:23:10.571603  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1007 12:23:10.591578  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1007 12:23:10.614225  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1007 12:23:10.634324  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1007 12:23:10.653367  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1007 12:23:10.670424  407433 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1007 12:23:10.687921  407433 ssh_runner.go:195] Run: openssl version
	I1007 12:23:10.693659  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:23:10.705376  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:23:10.710726  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:23:10.710791  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:23:10.718248  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:23:10.732612  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:23:10.745398  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:23:10.750153  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:23:10.750214  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:23:10.756370  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:23:10.768784  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:23:10.780787  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:23:10.785548  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:23:10.785622  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:23:10.791760  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:23:10.803743  407433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:23:10.808736  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:23:10.814899  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:23:10.821143  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:23:10.827606  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:23:10.833912  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:23:10.840134  407433 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 12:23:10.846577  407433 kubeadm.go:934] updating node {m03 192.168.39.149 8443 v1.31.1 crio true true} ...
	I1007 12:23:10.846676  407433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:23:10.846714  407433 kube-vip.go:115] generating kube-vip config ...
	I1007 12:23:10.846760  407433 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1007 12:23:10.864581  407433 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1007 12:23:10.864668  407433 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1007 12:23:10.864739  407433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:23:10.875792  407433 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:23:10.875886  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1007 12:23:10.886447  407433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:23:10.904363  407433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:23:10.922695  407433 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1007 12:23:10.940459  407433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:23:10.944764  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:23:10.958113  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:23:11.105627  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:23:11.125550  407433 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.149 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:23:11.125888  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:23:11.127716  407433 out.go:177] * Verifying Kubernetes components...
	I1007 12:23:11.129145  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:23:11.305386  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:23:11.325083  407433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:23:11.325389  407433 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:23:11.325462  407433 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:23:11.325756  407433 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m03" to be "Ready" ...
	I1007 12:23:11.325833  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:11.325841  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:11.325849  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:11.325852  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:11.329984  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:11.826772  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:11.826797  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:11.826807  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:11.826812  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:11.831688  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:11.832196  407433 node_ready.go:49] node "ha-628553-m03" has status "Ready":"True"
	I1007 12:23:11.832220  407433 node_ready.go:38] duration metric: took 506.44323ms for node "ha-628553-m03" to be "Ready" ...
	I1007 12:23:11.832245  407433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:23:11.832336  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:23:11.832347  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:11.832358  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:11.832365  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:11.848204  407433 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1007 12:23:11.861310  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:11.861435  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:11.861446  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:11.861458  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:11.861466  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:11.870384  407433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:23:11.871506  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:11.871524  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:11.871535  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:11.871541  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:11.877552  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:23:12.361651  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:12.361681  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:12.361692  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:12.361698  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:12.365468  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:12.366272  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:12.366289  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:12.366297  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:12.366302  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:12.369324  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:12.862243  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:12.862279  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:12.862291  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:12.862297  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:12.867091  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:12.868063  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:12.868089  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:12.868100  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:12.868106  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:12.871897  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:13.361624  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:13.361649  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:13.361658  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:13.361662  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:13.365490  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:13.366348  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:13.366364  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:13.366373  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:13.366377  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:13.369870  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:13.862332  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:13.862356  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:13.862365  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:13.862368  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:13.866523  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:13.867251  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:13.867269  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:13.867277  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:13.867282  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:13.870634  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:13.871447  407433 pod_ready.go:103] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:14.362193  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:14.362228  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:14.362240  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:14.362245  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:14.366181  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:14.367066  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:14.367088  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:14.367100  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:14.367106  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:14.370503  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:14.862599  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:14.862626  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:14.862640  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:14.862646  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:14.867107  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:14.867797  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:14.867817  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:14.867825  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:14.867830  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:14.871636  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:15.362516  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:15.362542  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:15.362550  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:15.362585  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:15.366026  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:15.366840  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:15.366856  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:15.366863  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:15.366868  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:15.369831  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:15.861611  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:15.861634  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:15.861642  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:15.861647  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:15.866159  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:15.866896  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:15.866915  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:15.866922  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:15.866927  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:15.870596  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:16.361830  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:16.361863  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:16.361872  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:16.361876  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:16.366367  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:16.367293  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:16.367315  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:16.367327  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:16.367332  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:16.371071  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:16.371636  407433 pod_ready.go:103] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:16.862048  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:16.862076  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:16.862086  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:16.862092  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:16.866414  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:16.867130  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:16.867151  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:16.867163  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:16.867167  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:16.870850  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:17.362394  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:17.362418  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:17.362426  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:17.362430  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:17.366486  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:17.367294  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:17.367312  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:17.367320  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:17.367324  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:17.371106  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:17.862513  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:17.862539  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:17.862548  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:17.862554  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:17.866633  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:17.867337  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:17.867354  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:17.867363  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:17.867367  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:17.870721  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:18.361539  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:18.361562  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:18.361573  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:18.361578  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:18.365313  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:18.366026  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:18.366043  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:18.366053  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:18.366058  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:18.369343  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:18.861585  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:18.861610  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:18.861618  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:18.861621  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:18.865321  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:18.866215  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:18.866239  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:18.866250  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:18.866254  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:18.869184  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:18.869834  407433 pod_ready.go:103] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:19.361628  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:19.361652  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:19.361661  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:19.361665  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:19.365480  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:19.367101  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:19.367123  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:19.367137  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:19.367143  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:19.370524  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:19.861746  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:19.861771  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:19.861780  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:19.861785  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:19.865697  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:19.866576  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:19.866601  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:19.866613  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:19.866621  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:19.869999  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.362008  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:20.362035  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.362046  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.362052  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.365798  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.366543  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:20.366570  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.366583  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.366588  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.370420  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.862465  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:23:20.862494  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.862506  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.862512  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.866743  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:20.867603  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:20.867625  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.867637  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.867646  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.876747  407433 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1007 12:23:20.877196  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:20.877215  407433 pod_ready.go:82] duration metric: took 9.015873885s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.877228  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.877303  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:23:20.877313  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.877323  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.877329  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.880598  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.881340  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:20.881359  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.881367  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.881373  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.884755  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.885234  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:20.885253  407433 pod_ready.go:82] duration metric: took 8.017506ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.885264  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.885338  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:23:20.885346  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.885356  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.885363  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.888642  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.889384  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:20.889408  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.889417  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.889423  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.892846  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.893352  407433 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:20.893371  407433 pod_ready.go:82] duration metric: took 8.101384ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.893381  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.893450  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:23:20.893457  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.893465  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.893469  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.896263  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:20.897009  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:20.897028  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.897039  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.897045  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.900030  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:20.900719  407433 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:20.900743  407433 pod_ready.go:82] duration metric: took 7.354933ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.900758  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:20.900849  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:20.900859  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.900870  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.900878  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.904334  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:20.905453  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:20.905472  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:20.905483  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:20.905489  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:20.908818  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:21.401777  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:21.401802  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:21.401810  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:21.401816  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:21.405454  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:21.406241  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:21.406263  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:21.406275  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:21.406281  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:21.409714  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:21.901278  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:21.901305  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:21.901318  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:21.901322  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:21.905374  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:21.906206  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:21.906228  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:21.906239  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:21.906245  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:21.909773  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:22.401497  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:22.401525  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:22.401536  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:22.401541  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:22.405874  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:22.407120  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:22.407144  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:22.407155  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:22.407161  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:22.413762  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:23:22.901518  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:22.901544  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:22.901552  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:22.901557  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:22.906234  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:22.907167  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:22.907190  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:22.907200  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:22.907205  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:22.910393  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:22.910825  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:23.401248  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:23.401280  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:23.401293  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:23.401298  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:23.407107  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:23:23.408075  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:23.408096  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:23.408106  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:23.408111  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:23.415961  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:23:23.901287  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:23.901319  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:23.901331  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:23.901337  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:23.905904  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:23.906565  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:23.906581  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:23.906590  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:23.906595  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:23.910006  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:24.401161  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:24.401190  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:24.401202  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:24.401209  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:24.404839  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:24.405564  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:24.405583  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:24.405593  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:24.405598  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:24.408750  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:24.901100  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:24.901137  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:24.901151  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:24.901156  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:24.905321  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:24.906076  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:24.906098  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:24.906110  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:24.906116  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:24.909394  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:25.402019  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:25.402048  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:25.402060  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:25.402066  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:25.406096  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:25.406780  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:25.406800  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:25.406811  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:25.406817  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:25.410372  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:25.411120  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:25.901431  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:25.901462  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:25.901476  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:25.901485  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:25.905181  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:25.905913  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:25.905932  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:25.905943  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:25.905948  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:25.909147  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:26.401392  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:26.401413  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:26.401422  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:26.401425  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:26.404670  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:26.405486  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:26.405509  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:26.405524  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:26.405531  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:26.408669  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:26.901799  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:26.901824  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:26.901836  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:26.901841  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:26.905889  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:26.906791  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:26.906814  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:26.906825  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:26.906833  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:26.910509  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:27.401061  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:27.401091  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:27.401101  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:27.401107  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:27.404502  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:27.405491  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:27.405516  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:27.405531  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:27.405537  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:27.408535  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:27.901655  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:27.901682  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:27.901693  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:27.901698  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:27.906766  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:23:27.907910  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:27.907930  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:27.907943  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:27.907949  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:27.910831  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:27.911452  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:28.401384  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:28.401412  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:28.401421  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:28.401426  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:28.405773  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:28.406658  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:28.406679  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:28.406690  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:28.406697  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:28.409901  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:28.901341  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:28.901371  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:28.901380  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:28.901389  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:28.905747  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:28.907292  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:28.907314  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:28.907326  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:28.907331  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:28.910952  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:29.401668  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:29.401703  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:29.401714  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:29.401719  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:29.405845  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:29.406720  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:29.406742  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:29.406753  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:29.406757  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:29.409965  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:29.901326  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:29.901360  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:29.901369  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:29.901373  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:29.905350  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:29.906192  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:29.906223  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:29.906235  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:29.906243  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:29.910387  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:30.401772  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:30.401801  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:30.401813  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:30.401819  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:30.406389  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:30.407392  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:30.407416  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:30.407429  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:30.407436  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:30.410951  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:30.411545  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:30.901925  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:30.901958  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:30.901970  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:30.901977  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:30.905611  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:30.906422  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:30.906444  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:30.906455  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:30.906460  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:30.910537  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:31.401800  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:31.401827  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:31.401836  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:31.401840  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:31.406134  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:31.407148  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:31.407173  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:31.407191  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:31.407197  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:31.410926  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:31.901827  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:31.901858  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:31.901870  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:31.901878  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:31.906665  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:31.907501  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:31.907537  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:31.907549  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:31.907555  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:31.911140  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:32.400976  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:32.401003  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:32.401014  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:32.401019  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:32.405547  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:32.406242  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:32.406258  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:32.406265  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:32.406269  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:32.409439  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:32.901149  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:32.901181  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:32.901193  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:32.901198  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:32.905022  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:32.905716  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:32.905734  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:32.905744  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:32.905748  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:32.909130  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:32.909906  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:33.401283  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:33.401309  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:33.401318  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:33.401325  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:33.404886  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:33.405856  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:33.405881  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:33.405893  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:33.405901  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:33.409501  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:33.901882  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:33.901915  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:33.901925  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:33.901928  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:33.905378  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:33.906066  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:33.906083  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:33.906091  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:33.906095  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:33.909006  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:34.401235  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:34.401260  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:34.401269  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:34.401272  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:34.404838  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:34.406012  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:34.406031  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:34.406039  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:34.406045  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:34.409124  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:34.900951  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:34.900983  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:34.900993  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:34.900997  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:34.905147  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:34.905757  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:34.905776  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:34.905787  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:34.905794  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:34.908507  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:35.401863  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:35.401890  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:35.401901  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:35.401906  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:35.405387  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:35.406401  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:35.406446  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:35.406455  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:35.406459  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:35.409292  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:35.409806  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:35.901149  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:35.901196  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:35.901221  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:35.901229  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:35.904816  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:35.905574  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:35.905592  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:35.905602  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:35.905609  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:35.908238  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:36.401546  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:36.401580  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:36.401593  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:36.401598  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:36.405148  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:36.406022  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:36.406039  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:36.406048  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:36.406056  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:36.408821  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:36.901819  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:36.901855  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:36.901867  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:36.901876  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:36.905550  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:36.906357  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:36.906377  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:36.906387  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:36.906391  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:36.909398  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:37.401226  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:37.401258  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:37.401271  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:37.401279  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:37.406353  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:23:37.406945  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:37.406977  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:37.406989  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:37.406998  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:37.410073  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:37.410643  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:37.901879  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:37.901906  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:37.901917  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:37.901922  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:37.906062  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:37.906861  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:37.906877  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:37.906888  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:37.906895  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:37.910684  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:38.401696  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:38.401722  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:38.401731  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:38.401734  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:38.406385  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:38.407114  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:38.407137  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:38.407145  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:38.407150  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:38.410220  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:38.901333  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:38.901362  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:38.901371  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:38.901375  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:38.905673  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:38.906342  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:38.906358  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:38.906367  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:38.906372  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:38.909538  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:39.401617  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:39.401647  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:39.401658  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:39.401665  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:39.405325  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:39.406247  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:39.406263  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:39.406271  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:39.406275  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:39.408869  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:39.901009  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:39.901066  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:39.901079  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:39.901088  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:39.905186  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:39.906287  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:39.906303  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:39.906312  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:39.906316  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:39.909386  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:39.909910  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:40.401887  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:40.401919  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:40.401932  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:40.401938  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:40.405563  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:40.406179  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:40.406196  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:40.406204  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:40.406207  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:40.409031  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:40.901914  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:40.901947  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:40.901959  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:40.901964  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:40.905577  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:40.906160  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:40.906178  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:40.906187  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:40.906192  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:40.909439  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:41.401762  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:41.401788  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:41.401796  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:41.401801  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:41.405508  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:41.406192  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:41.406210  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:41.406219  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:41.406222  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:41.409156  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:41.901039  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:41.901069  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:41.901082  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:41.901088  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:41.904730  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:41.905638  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:41.905657  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:41.905667  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:41.905672  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:41.908263  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:42.401627  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:42.401654  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:42.401663  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:42.401668  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:42.406041  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:42.406703  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:42.406722  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:42.406730  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:42.406734  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:42.409745  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:42.410239  407433 pod_ready.go:103] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"False"
	I1007 12:23:42.901587  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:42.901614  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:42.901622  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:42.901625  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:42.905571  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:42.906277  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:42.906295  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:42.906303  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:42.906307  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:42.909677  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:43.401661  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:43.401689  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:43.401697  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:43.401703  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:43.405188  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:43.406046  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:43.406065  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:43.406073  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:43.406077  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:43.409716  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:43.901229  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:43.901256  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:43.901263  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:43.901268  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:43.905115  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:43.905912  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:43.905929  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:43.905937  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:43.905941  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:43.908930  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:44.401253  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:23:44.401281  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.401293  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.401297  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.405017  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.406070  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:44.406089  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.406097  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.406101  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.409080  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:44.409563  407433 pod_ready.go:93] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.409581  407433 pod_ready.go:82] duration metric: took 23.50881715s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.409602  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.409726  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:23:44.409737  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.409744  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.409749  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.412715  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:44.413235  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:44.413247  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.413255  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.413258  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.416010  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:44.416481  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.416502  407433 pod_ready.go:82] duration metric: took 6.890773ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.416513  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.416581  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:23:44.416590  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.416598  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.416603  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.419667  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.420424  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:44.420458  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.420470  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.420476  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.423889  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.424313  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.424334  407433 pod_ready.go:82] duration metric: took 7.814307ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.424348  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.424417  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:23:44.424427  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.424437  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.424444  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.428190  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.428882  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:44.428900  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.428911  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.428918  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.431588  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:23:44.432108  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.432137  407433 pod_ready.go:82] duration metric: took 7.779602ms for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.432151  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.432238  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:23:44.432249  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.432260  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.432266  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.435639  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.436600  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:44.436617  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.436626  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.436630  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.440567  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.441253  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.441273  407433 pod_ready.go:82] duration metric: took 9.114345ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.441284  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.601679  407433 request.go:632] Waited for 160.319206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:23:44.601747  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:23:44.601755  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.601764  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.601768  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.605498  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:44.801775  407433 request.go:632] Waited for 195.353982ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:44.801836  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:44.801841  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:44.801849  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:44.801854  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:44.805954  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:44.806553  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:44.806577  407433 pod_ready.go:82] duration metric: took 365.285871ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:44.806590  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:45.002112  407433 request.go:632] Waited for 195.437696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:23:45.002184  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:23:45.002191  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:45.002201  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:45.002211  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:45.006294  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:45.202056  407433 request.go:632] Waited for 194.857504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:45.202132  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:45.202139  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:45.202151  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:45.202157  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:45.205444  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:45.402165  407433 request.go:632] Waited for 95.263491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:23:45.402239  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:23:45.402248  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:45.402258  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:45.402264  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:45.409336  407433 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1007 12:23:45.601415  407433 request.go:632] Waited for 191.299121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:45.601498  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:45.601503  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:45.601511  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:45.601518  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:45.604967  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:45.605527  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:45.605552  407433 pod_ready.go:82] duration metric: took 798.95512ms for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
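The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side token-bucket rate limiter, not from API Priority and Fairness on the server: once the polling loop exceeds the client's QPS/burst budget, each request waits in the limiter before it is sent. A minimal Go sketch of where that budget lives, assuming a standard kubeconfig-based client; the path and the QPS/Burst values are illustrative and are not minikube's actual configuration:

```go
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; minikube wires up its client differently.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}

	// client-go's default client-side limiter allows roughly 5 requests/second
	// with a burst of 10; exceeding it produces the "Waited for ... due to
	// client-side throttling" messages seen in the log. Raising QPS/Burst
	// relaxes that limiter.
	config.QPS = 50
	config.Burst = 100

	if _, err := kubernetes.NewForConfig(config); err != nil {
		log.Fatal(err)
	}
}
```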
	I1007 12:23:45.605564  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:45.802045  407433 request.go:632] Waited for 196.398573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:23:45.802132  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:23:45.802140  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:45.802150  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:45.802158  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:45.806194  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:46.001912  407433 request.go:632] Waited for 194.996337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:46.001992  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:46.001999  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:46.002009  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:46.002025  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:46.005973  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:46.006462  407433 pod_ready.go:93] pod "kube-proxy-956k4" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:46.006490  407433 pod_ready.go:82] duration metric: took 400.920874ms for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:46.006503  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:46.201881  407433 request.go:632] Waited for 195.304463ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:23:46.201942  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:23:46.201948  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:46.201955  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:46.201960  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:46.205784  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:46.402338  407433 request.go:632] Waited for 195.651209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:23:46.402414  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:23:46.402420  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:46.402429  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:46.402433  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:46.405950  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:46.406728  407433 pod_ready.go:98] node "ha-628553-m04" hosting pod "kube-proxy-fkzqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-628553-m04" has status "Ready":"Unknown"
	I1007 12:23:46.406754  407433 pod_ready.go:82] duration metric: took 400.24566ms for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	E1007 12:23:46.406764  407433 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-628553-m04" hosting pod "kube-proxy-fkzqr" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-628553-m04" has status "Ready":"Unknown"
	I1007 12:23:46.406771  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:46.601841  407433 request.go:632] Waited for 194.991422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:23:46.601928  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:23:46.601934  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:46.601942  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:46.601950  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:46.606194  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:46.802178  407433 request.go:632] Waited for 195.348094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:46.802282  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:46.802291  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:46.802300  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:46.802307  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:46.806011  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:46.806717  407433 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:46.806740  407433 pod_ready.go:82] duration metric: took 399.962338ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:46.806751  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:47.001862  407433 request.go:632] Waited for 195.011199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:23:47.001951  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:23:47.001958  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:47.001970  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:47.001976  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:47.005786  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:47.202192  407433 request.go:632] Waited for 195.404826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:47.202272  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:47.202278  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:47.202289  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:47.202296  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:47.205737  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:47.206263  407433 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:47.206285  407433 pod_ready.go:82] duration metric: took 399.527218ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:47.206296  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:47.401271  407433 request.go:632] Waited for 194.871758ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:23:47.401377  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:23:47.401387  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:47.401398  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:47.401407  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:47.405036  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:47.602200  407433 request.go:632] Waited for 196.363571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:47.602263  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:23:47.602270  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:47.602281  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:47.602286  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:47.606027  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:47.606573  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:47.606596  407433 pod_ready.go:82] duration metric: took 400.293688ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:47.606608  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:47.801693  407433 request.go:632] Waited for 194.969862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:23:47.801777  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:23:47.801786  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:47.801799  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:47.801809  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:47.805884  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:48.002025  407433 request.go:632] Waited for 195.383914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:48.002106  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:23:48.002112  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.002122  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.002129  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.006411  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:48.007140  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:48.007161  407433 pod_ready.go:82] duration metric: took 400.547189ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:48.007171  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:48.202325  407433 request.go:632] Waited for 195.078729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:23:48.202388  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:23:48.202393  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.202401  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.202413  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.207192  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:48.402151  407433 request.go:632] Waited for 193.426943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:48.402240  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:23:48.402248  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.402260  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.402270  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.406156  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:48.406819  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:23:48.406846  407433 pod_ready.go:82] duration metric: took 399.667367ms for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:23:48.406866  407433 pod_ready.go:39] duration metric: took 36.574596709s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
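The long stretch of paired GETs above (fetch the pod, fetch its node, sleep roughly half a second, repeat until the Ready condition flips to True) is the polling pattern behind these pod_ready entries. A minimal client-go sketch of that loop, assuming an already-constructed clientset; the function name and timeout handling are illustrative and this is not minikube's actual pod_ready.go:

```go
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// expires, mirroring the GET/inspect/sleep cycle visible in the log above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}
```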
	I1007 12:23:48.406888  407433 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:23:48.406948  407433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:23:48.432065  407433 api_server.go:72] duration metric: took 37.306445342s to wait for apiserver process to appear ...
	I1007 12:23:48.432098  407433 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:23:48.432125  407433 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I1007 12:23:48.439718  407433 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I1007 12:23:48.439838  407433 round_trippers.go:463] GET https://192.168.39.110:8443/version
	I1007 12:23:48.439852  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.439865  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.439875  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.440922  407433 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1007 12:23:48.441046  407433 api_server.go:141] control plane version: v1.31.1
	I1007 12:23:48.441083  407433 api_server.go:131] duration metric: took 8.977422ms to wait for apiserver health ...
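The healthz step above is an HTTPS GET against the apiserver's /healthz endpoint that expects a 200 response with the body "ok". A minimal sketch of that probe; the InsecureSkipVerify is only to keep the example short, and a real check would use the cluster CA and credentials from the kubeconfig:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustrative only: skip certificate verification to keep the sketch short.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.39.110:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}
```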
	I1007 12:23:48.441105  407433 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:23:48.601351  407433 request.go:632] Waited for 160.153035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:23:48.601433  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:23:48.601449  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.601460  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.601466  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.608187  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:23:48.616433  407433 system_pods.go:59] 26 kube-system pods found
	I1007 12:23:48.616470  407433 system_pods.go:61] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:23:48.616475  407433 system_pods.go:61] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:23:48.616479  407433 system_pods.go:61] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:23:48.616489  407433 system_pods.go:61] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:23:48.616492  407433 system_pods.go:61] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:23:48.616527  407433 system_pods.go:61] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:23:48.616535  407433 system_pods.go:61] "kindnet-rwk2r" [8ec7b1f3-d6b5-4e44-8574-c197eb45bf28] Running
	I1007 12:23:48.616543  407433 system_pods.go:61] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:23:48.616547  407433 system_pods.go:61] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:23:48.616550  407433 system_pods.go:61] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:23:48.616554  407433 system_pods.go:61] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:23:48.616557  407433 system_pods.go:61] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:23:48.616561  407433 system_pods.go:61] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:23:48.616566  407433 system_pods.go:61] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:23:48.616570  407433 system_pods.go:61] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:23:48.616575  407433 system_pods.go:61] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:23:48.616578  407433 system_pods.go:61] "kube-proxy-fkzqr" [16f7acfc-13b5-426d-9b0a-59a5131fc297] Running
	I1007 12:23:48.616582  407433 system_pods.go:61] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:23:48.616585  407433 system_pods.go:61] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:23:48.616588  407433 system_pods.go:61] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:23:48.616595  407433 system_pods.go:61] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:23:48.616600  407433 system_pods.go:61] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:23:48.616603  407433 system_pods.go:61] "kube-vip-ha-628553" [56148ec7-dffa-4dfc-8414-c9feb65b09d3] Running
	I1007 12:23:48.616607  407433 system_pods.go:61] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:23:48.616612  407433 system_pods.go:61] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:23:48.616616  407433 system_pods.go:61] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:23:48.616621  407433 system_pods.go:74] duration metric: took 175.509164ms to wait for pod list to return data ...
	I1007 12:23:48.616631  407433 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:23:48.802229  407433 request.go:632] Waited for 185.508899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:23:48.802303  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/default/serviceaccounts
	I1007 12:23:48.802312  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:48.802321  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:48.802329  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:48.806434  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:23:48.806628  407433 default_sa.go:45] found service account: "default"
	I1007 12:23:48.806657  407433 default_sa.go:55] duration metric: took 190.017985ms for default service account to be created ...
	I1007 12:23:48.806671  407433 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:23:49.002213  407433 request.go:632] Waited for 195.441972ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:23:49.002280  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:23:49.002285  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:49.002293  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:49.002296  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:49.008706  407433 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1007 12:23:49.017329  407433 system_pods.go:86] 26 kube-system pods found
	I1007 12:23:49.017363  407433 system_pods.go:89] "coredns-7c65d6cfc9-ktmzq" [fda6ae24-5407-4f63-9a56-29fa9eba8966] Running
	I1007 12:23:49.017374  407433 system_pods.go:89] "coredns-7c65d6cfc9-rsr6v" [60fd800f-38f1-40d5-9ecf-cbf21bf5add6] Running
	I1007 12:23:49.017378  407433 system_pods.go:89] "etcd-ha-628553" [3579a80c-5e63-4d4c-adc9-73cac073a802] Running
	I1007 12:23:49.017382  407433 system_pods.go:89] "etcd-ha-628553-m02" [0fefb6ea-76e1-401a-88d6-a423e399438e] Running
	I1007 12:23:49.017385  407433 system_pods.go:89] "etcd-ha-628553-m03" [dc936fce-fbe1-43c1-86cd-25d7fa4594cf] Running
	I1007 12:23:49.017392  407433 system_pods.go:89] "kindnet-9rq2w" [e4fcdbc2-6109-43b9-a499-c191e251a062] Running
	I1007 12:23:49.017396  407433 system_pods.go:89] "kindnet-rwk2r" [8ec7b1f3-d6b5-4e44-8574-c197eb45bf28] Running
	I1007 12:23:49.017399  407433 system_pods.go:89] "kindnet-sb4xd" [a9f248cb-dc39-4ccd-8424-c44d1042d9e0] Running
	I1007 12:23:49.017403  407433 system_pods.go:89] "kindnet-snf5v" [a6360ec2-8f69-454b-9bfc-d636ebd8b372] Running
	I1007 12:23:49.017406  407433 system_pods.go:89] "kube-apiserver-ha-628553" [7fe040c4-6be7-4883-ab71-bace9009cbb5] Running
	I1007 12:23:49.017410  407433 system_pods.go:89] "kube-apiserver-ha-628553-m02" [9511205f-2fa0-4044-bd8f-e59f419fb2c1] Running
	I1007 12:23:49.017413  407433 system_pods.go:89] "kube-apiserver-ha-628553-m03" [08a932b8-4589-4267-b780-f6442593caa6] Running
	I1007 12:23:49.017417  407433 system_pods.go:89] "kube-controller-manager-ha-628553" [845ea9a6-3f13-4674-bfb3-b7f12324cd5a] Running
	I1007 12:23:49.017420  407433 system_pods.go:89] "kube-controller-manager-ha-628553-m02" [0628a3a5-160a-4061-8567-443893e1330b] Running
	I1007 12:23:49.017424  407433 system_pods.go:89] "kube-controller-manager-ha-628553-m03" [1db156a1-6009-41ad-9ef8-6cb82086e86f] Running
	I1007 12:23:49.017429  407433 system_pods.go:89] "kube-proxy-956k4" [6e4b7d91-62fc-431d-9b10-cca1155729da] Running
	I1007 12:23:49.017436  407433 system_pods.go:89] "kube-proxy-fkzqr" [16f7acfc-13b5-426d-9b0a-59a5131fc297] Running
	I1007 12:23:49.017439  407433 system_pods.go:89] "kube-proxy-h6vg8" [97dd82f4-8e31-4569-b762-fc804d08efb0] Running
	I1007 12:23:49.017442  407433 system_pods.go:89] "kube-proxy-s5c6d" [168add56-da7e-45fd-bc2b-028a2c1b54ea] Running
	I1007 12:23:49.017446  407433 system_pods.go:89] "kube-scheduler-ha-628553" [12384484-62b8-4880-9f09-0410442a2cd1] Running
	I1007 12:23:49.017449  407433 system_pods.go:89] "kube-scheduler-ha-628553-m02" [8b629623-bd07-491f-9662-06e24ac3453f] Running
	I1007 12:23:49.017452  407433 system_pods.go:89] "kube-scheduler-ha-628553-m03" [c430369f-f44c-485c-927e-220bab0078d3] Running
	I1007 12:23:49.017460  407433 system_pods.go:89] "kube-vip-ha-628553" [56148ec7-dffa-4dfc-8414-c9feb65b09d3] Running
	I1007 12:23:49.017466  407433 system_pods.go:89] "kube-vip-ha-628553-m02" [b652ac92-da69-4189-96ec-ad409610464c] Running
	I1007 12:23:49.017469  407433 system_pods.go:89] "kube-vip-ha-628553-m03" [82826fa8-ab2c-42d7-8c2d-1c6261c25c35] Running
	I1007 12:23:49.017472  407433 system_pods.go:89] "storage-provisioner" [f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0] Running
	I1007 12:23:49.017478  407433 system_pods.go:126] duration metric: took 210.798472ms to wait for k8s-apps to be running ...
	I1007 12:23:49.017486  407433 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:23:49.017535  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:23:49.032835  407433 system_svc.go:56] duration metric: took 15.336372ms WaitForService to wait for kubelet
	I1007 12:23:49.032876  407433 kubeadm.go:582] duration metric: took 37.907263247s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:23:49.032902  407433 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:23:49.201334  407433 request.go:632] Waited for 168.278903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:23:49.201430  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:23:49.201441  407433 round_trippers.go:469] Request Headers:
	I1007 12:23:49.201453  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:23:49.201463  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:23:49.205415  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:23:49.206770  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:23:49.206795  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:23:49.206820  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:23:49.206824  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:23:49.206828  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:23:49.206831  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:23:49.206834  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:23:49.206837  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:23:49.206841  407433 node_conditions.go:105] duration metric: took 173.93387ms to run NodePressure ...
	I1007 12:23:49.206856  407433 start.go:241] waiting for startup goroutines ...
	I1007 12:23:49.206880  407433 start.go:255] writing updated cluster config ...
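	The sweep above is a sequence of plain Kubernetes API reads: list the kube-system pods and confirm each is Running, confirm the default service account exists, then read per-node CPU and ephemeral-storage capacity for the NodePressure check. A minimal client-go sketch of the same two reads (an illustrative stand-in, not minikube's own code; the kubeconfig path is an assumption):

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Build a client from the local kubeconfig (default path; adjust as needed).
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs := kubernetes.NewForConfigOrDie(cfg)

	        // Same check the log records: every kube-system pod should be Running.
	        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, p := range pods.Items {
	            fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
	        }

	        // And the NodePressure-style capacity read: CPU and ephemeral storage per node.
	        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, n := range nodes.Items {
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	        }
	    }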
	I1007 12:23:49.209205  407433 out.go:201] 
	I1007 12:23:49.210753  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:23:49.210885  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:23:49.212476  407433 out.go:177] * Starting "ha-628553-m04" worker node in "ha-628553" cluster
	I1007 12:23:49.213667  407433 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:23:49.213695  407433 cache.go:56] Caching tarball of preloaded images
	I1007 12:23:49.213837  407433 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:23:49.213856  407433 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:23:49.213989  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:23:49.214215  407433 start.go:360] acquireMachinesLock for ha-628553-m04: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:23:49.214284  407433 start.go:364] duration metric: took 33.583µs to acquireMachinesLock for "ha-628553-m04"
	I1007 12:23:49.214305  407433 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:23:49.214322  407433 fix.go:54] fixHost starting: m04
	I1007 12:23:49.214728  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:23:49.214773  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:23:49.230817  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44297
	I1007 12:23:49.231251  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:23:49.231746  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:23:49.231765  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:23:49.232170  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:23:49.232389  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:23:49.232578  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetState
	I1007 12:23:49.234354  407433 fix.go:112] recreateIfNeeded on ha-628553-m04: state=Stopped err=<nil>
	I1007 12:23:49.234381  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	W1007 12:23:49.234559  407433 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:23:49.236807  407433 out.go:177] * Restarting existing kvm2 VM for "ha-628553-m04" ...
	I1007 12:23:49.238021  407433 main.go:141] libmachine: (ha-628553-m04) Calling .Start
	I1007 12:23:49.238250  407433 main.go:141] libmachine: (ha-628553-m04) Ensuring networks are active...
	I1007 12:23:49.239018  407433 main.go:141] libmachine: (ha-628553-m04) Ensuring network default is active
	I1007 12:23:49.239377  407433 main.go:141] libmachine: (ha-628553-m04) Ensuring network mk-ha-628553 is active
	I1007 12:23:49.239771  407433 main.go:141] libmachine: (ha-628553-m04) Getting domain xml...
	I1007 12:23:49.240336  407433 main.go:141] libmachine: (ha-628553-m04) Creating domain...
	I1007 12:23:50.530662  407433 main.go:141] libmachine: (ha-628553-m04) Waiting to get IP...
	I1007 12:23:50.531807  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:50.532326  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:50.532394  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:50.532303  409719 retry.go:31] will retry after 303.800673ms: waiting for machine to come up
	I1007 12:23:50.838195  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:50.838893  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:50.838921  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:50.838836  409719 retry.go:31] will retry after 239.89794ms: waiting for machine to come up
	I1007 12:23:51.080318  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:51.080882  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:51.080918  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:51.080819  409719 retry.go:31] will retry after 362.373785ms: waiting for machine to come up
	I1007 12:23:51.445366  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:51.445901  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:51.445933  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:51.445831  409719 retry.go:31] will retry after 593.154236ms: waiting for machine to come up
	I1007 12:23:52.040581  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:52.040920  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:52.040951  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:52.040850  409719 retry.go:31] will retry after 511.859545ms: waiting for machine to come up
	I1007 12:23:52.554682  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:52.555211  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:52.555242  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:52.555144  409719 retry.go:31] will retry after 783.145525ms: waiting for machine to come up
	I1007 12:23:53.340031  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:53.340503  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:53.340534  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:53.340434  409719 retry.go:31] will retry after 890.686855ms: waiting for machine to come up
	I1007 12:23:54.233201  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:54.233851  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:54.233881  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:54.233799  409719 retry.go:31] will retry after 1.106716095s: waiting for machine to come up
	I1007 12:23:55.341582  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:55.342089  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:55.342118  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:55.342042  409719 retry.go:31] will retry after 1.803926987s: waiting for machine to come up
	I1007 12:23:57.148067  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:57.148434  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:57.148461  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:57.148414  409719 retry.go:31] will retry after 1.623538456s: waiting for machine to come up
	I1007 12:23:58.773300  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:23:58.773907  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:23:58.773939  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:23:58.773829  409719 retry.go:31] will retry after 2.479088328s: waiting for machine to come up
	I1007 12:24:01.254457  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:01.254920  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:24:01.254943  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:24:01.254879  409719 retry.go:31] will retry after 3.27298755s: waiting for machine to come up
	I1007 12:24:04.529276  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:04.529763  407433 main.go:141] libmachine: (ha-628553-m04) DBG | unable to find current IP address of domain ha-628553-m04 in network mk-ha-628553
	I1007 12:24:04.529785  407433 main.go:141] libmachine: (ha-628553-m04) DBG | I1007 12:24:04.529715  409719 retry.go:31] will retry after 4.066059297s: waiting for machine to come up
	I1007 12:24:08.600875  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.601416  407433 main.go:141] libmachine: (ha-628553-m04) Found IP for machine: 192.168.39.119
	I1007 12:24:08.601443  407433 main.go:141] libmachine: (ha-628553-m04) Reserving static IP address...
	I1007 12:24:08.601457  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has current primary IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.601784  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "ha-628553-m04", mac: "52:54:00:be:c5:aa", ip: "192.168.39.119"} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.601805  407433 main.go:141] libmachine: (ha-628553-m04) DBG | skip adding static IP to network mk-ha-628553 - found existing host DHCP lease matching {name: "ha-628553-m04", mac: "52:54:00:be:c5:aa", ip: "192.168.39.119"}
	I1007 12:24:08.601822  407433 main.go:141] libmachine: (ha-628553-m04) Reserved static IP address: 192.168.39.119
	I1007 12:24:08.601831  407433 main.go:141] libmachine: (ha-628553-m04) Waiting for SSH to be available...
	I1007 12:24:08.601839  407433 main.go:141] libmachine: (ha-628553-m04) DBG | Getting to WaitForSSH function...
	I1007 12:24:08.604097  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.604455  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.604490  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.604617  407433 main.go:141] libmachine: (ha-628553-m04) DBG | Using SSH client type: external
	I1007 12:24:08.604677  407433 main.go:141] libmachine: (ha-628553-m04) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa (-rw-------)
	I1007 12:24:08.604709  407433 main.go:141] libmachine: (ha-628553-m04) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:24:08.604742  407433 main.go:141] libmachine: (ha-628553-m04) DBG | About to run SSH command:
	I1007 12:24:08.604755  407433 main.go:141] libmachine: (ha-628553-m04) DBG | exit 0
	I1007 12:24:08.735165  407433 main.go:141] libmachine: (ha-628553-m04) DBG | SSH cmd err, output: <nil>: 
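	The restart sequence above is a poll-with-growing-backoff loop: the libvirt lease table is checked until the domain's MAC maps to an IP, then SSH on port 22 is probed before provisioning continues. A generic sketch of that wait pattern (illustrative only, not the retry helper minikube actually uses; the lease lookup is a stand-in):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "net"
	        "time"
	    )

	    // waitFor polls check() until it succeeds or the deadline passes, sleeping a
	    // jittered, growing interval between attempts - the same shape as the
	    // "will retry after ...: waiting for machine to come up" lines in the log.
	    func waitFor(what string, timeout time.Duration, check func() error) error {
	        deadline := time.Now().Add(timeout)
	        backoff := 300 * time.Millisecond
	        for attempt := 1; ; attempt++ {
	            err := check()
	            if err == nil {
	                return nil
	            }
	            if time.Now().After(deadline) {
	                return fmt.Errorf("timed out waiting for %s: %w", what, err)
	            }
	            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
	            fmt.Printf("attempt %d: will retry after %v: waiting for %s\n", attempt, sleep, what)
	            time.Sleep(sleep)
	            if backoff < 4*time.Second {
	                backoff *= 2
	            }
	        }
	    }

	    func main() {
	        ip := "192.168.39.119" // the address the DHCP lease eventually reports
	        _ = waitFor("machine to come up", 2*time.Minute, func() error {
	            // Stand-in for the libvirt lease lookup done by the real code.
	            if ip == "" {
	                return errors.New("no DHCP lease for the domain yet")
	            }
	            return nil
	        })
	        _ = waitFor("ssh", 1*time.Minute, func() error {
	            c, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 2*time.Second)
	            if err != nil {
	                return err
	            }
	            return c.Close()
	        })
	    }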
	I1007 12:24:08.735522  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetConfigRaw
	I1007 12:24:08.736240  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetIP
	I1007 12:24:08.738754  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.739240  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.739275  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.739554  407433 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/config.json ...
	I1007 12:24:08.739795  407433 machine.go:93] provisionDockerMachine start ...
	I1007 12:24:08.739817  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:08.740027  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:08.742193  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.742545  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.742591  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.742720  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:08.742919  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.743124  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.743284  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:08.743457  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:24:08.743708  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 12:24:08.743724  407433 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:24:08.859645  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:24:08.859677  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetMachineName
	I1007 12:24:08.859942  407433 buildroot.go:166] provisioning hostname "ha-628553-m04"
	I1007 12:24:08.859983  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetMachineName
	I1007 12:24:08.860195  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:08.862887  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.863255  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.863299  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.863433  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:08.863605  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.863763  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.863862  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:08.864017  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:24:08.864194  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 12:24:08.864210  407433 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-628553-m04 && echo "ha-628553-m04" | sudo tee /etc/hostname
	I1007 12:24:08.995163  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-628553-m04
	
	I1007 12:24:08.995198  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:08.998357  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.998766  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:08.998795  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:08.999025  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:08.999243  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.999431  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:08.999596  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:08.999802  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:24:09.000029  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 12:24:09.000051  407433 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-628553-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-628553-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-628553-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:24:09.124992  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:24:09.125028  407433 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:24:09.125052  407433 buildroot.go:174] setting up certificates
	I1007 12:24:09.125065  407433 provision.go:84] configureAuth start
	I1007 12:24:09.125074  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetMachineName
	I1007 12:24:09.125469  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetIP
	I1007 12:24:09.128005  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.128375  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.128408  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.128554  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.130978  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.131391  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.131423  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.131729  407433 provision.go:143] copyHostCerts
	I1007 12:24:09.131771  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:24:09.131814  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:24:09.131827  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:24:09.131912  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:24:09.132028  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:24:09.132059  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:24:09.132066  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:24:09.132109  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:24:09.132181  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:24:09.132210  407433 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:24:09.132215  407433 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:24:09.132249  407433 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:24:09.132336  407433 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.ha-628553-m04 san=[127.0.0.1 192.168.39.119 ha-628553-m04 localhost minikube]
	I1007 12:24:09.195630  407433 provision.go:177] copyRemoteCerts
	I1007 12:24:09.195723  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:24:09.195760  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.199172  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.199536  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.199565  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.199754  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:09.199952  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.200120  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:09.200284  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:09.293649  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:24:09.293723  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:24:09.323884  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:24:09.323974  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:24:09.352261  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:24:09.352355  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:24:09.379011  407433 provision.go:87] duration metric: took 253.929279ms to configureAuth
	I1007 12:24:09.379083  407433 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:24:09.379380  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:24:09.379482  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.382453  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.382893  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.382923  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.383117  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:09.383360  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.383596  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.383820  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:09.383993  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:24:09.384244  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 12:24:09.384260  407433 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:24:09.632687  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:24:09.632714  407433 machine.go:96] duration metric: took 892.90566ms to provisionDockerMachine
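	provisionDockerMachine, as logged above, is a string of one-off SSH commands: set the hostname, patch /etc/hosts, copy the CA/server certs, and drop the CRIO_MINIKUBE_OPTIONS file before restarting crio. A minimal sketch of running one such idempotent step over SSH with golang.org/x/crypto/ssh (the key path shape and hostname come from the log; treat the exact paths as assumptions):

	    package main

	    import (
	        "fmt"
	        "net"
	        "os"
	        "time"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        // Private key for the machine, as referenced in the log (adjust to the real profile dir).
	        key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-628553-m04/id_rsa"))
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            panic(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the logged ssh flags
	            Timeout:         10 * time.Second,
	        }
	        client, err := ssh.Dial("tcp", net.JoinHostPort("192.168.39.119", "22"), cfg)
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()

	        sess, err := client.NewSession()
	        if err != nil {
	            panic(err)
	        }
	        defer sess.Close()

	        // The same idempotent hostname step the provisioner runs.
	        out, err := sess.CombinedOutput(`sudo hostname ha-628553-m04 && echo "ha-628553-m04" | sudo tee /etc/hostname`)
	        fmt.Printf("err=%v output=%s\n", err, out)
	    }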
	I1007 12:24:09.632727  407433 start.go:293] postStartSetup for "ha-628553-m04" (driver="kvm2")
	I1007 12:24:09.632738  407433 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:24:09.632759  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:09.633108  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:24:09.633151  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.636346  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.636754  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.636792  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.637016  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:09.637214  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.637375  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:09.637486  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:09.727849  407433 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:24:09.732599  407433 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:24:09.732635  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:24:09.732727  407433 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:24:09.732823  407433 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:24:09.732837  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:24:09.732954  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:24:09.743228  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:24:09.769603  407433 start.go:296] duration metric: took 136.841708ms for postStartSetup
	I1007 12:24:09.769664  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:09.770065  407433 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1007 12:24:09.770109  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.772848  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.773402  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.773447  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.773610  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:09.773816  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.774011  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:09.774210  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:09.866866  407433 machine.go:197] restoring vm config from /var/lib/minikube/backup: [etc]
	I1007 12:24:09.866952  407433 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I1007 12:24:09.926582  407433 fix.go:56] duration metric: took 20.712259155s for fixHost
	I1007 12:24:09.926637  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:09.929943  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.930427  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:09.930457  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:09.930779  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:09.931041  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.931239  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:09.931404  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:09.931583  407433 main.go:141] libmachine: Using SSH client type: native
	I1007 12:24:09.931821  407433 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.119 22 <nil> <nil>}
	I1007 12:24:09.931839  407433 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:24:10.052238  407433 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728303850.026883254
	
	I1007 12:24:10.052264  407433 fix.go:216] guest clock: 1728303850.026883254
	I1007 12:24:10.052271  407433 fix.go:229] Guest: 2024-10-07 12:24:10.026883254 +0000 UTC Remote: 2024-10-07 12:24:09.926613197 +0000 UTC m=+463.844049172 (delta=100.270057ms)
	I1007 12:24:10.052289  407433 fix.go:200] guest clock delta is within tolerance: 100.270057ms
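	The guest-clock check above is simple arithmetic: parse the guest's `date +%s.%N` output, subtract the host-side timestamp, and compare the absolute delta to a tolerance (here it came out to roughly 100ms). A tiny sketch of that computation using the values from the log (the one-second tolerance is an assumed figure for illustration, not necessarily the threshold minikube applies):

	    package main

	    import (
	        "fmt"
	        "strconv"
	        "strings"
	        "time"
	    )

	    func main() {
	        // Output of `date +%s.%N` on the guest, as seen in the log.
	        guestRaw := "1728303850.026883254"
	        parts := strings.SplitN(guestRaw, ".", 2)
	        sec, _ := strconv.ParseInt(parts[0], 10, 64)  // literal input, so errors ignored here
	        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	        guest := time.Unix(sec, nsec)

	        // Host-side "Remote" timestamp from the same log line.
	        host, _ := time.Parse("2006-01-02 15:04:05.000000000 -0700 MST", "2024-10-07 12:24:09.926613197 +0000 UTC")

	        delta := guest.Sub(host)
	        if delta < 0 {
	            delta = -delta
	        }
	        tolerance := time.Second // assumed threshold for illustration
	        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
	    }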
	I1007 12:24:10.052294  407433 start.go:83] releasing machines lock for "ha-628553-m04", held for 20.837998474s
	I1007 12:24:10.052314  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:10.052639  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetIP
	I1007 12:24:10.055673  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.056063  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:10.056109  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.058302  407433 out.go:177] * Found network options:
	I1007 12:24:10.060025  407433 out.go:177]   - NO_PROXY=192.168.39.110,192.168.39.169,192.168.39.149
	W1007 12:24:10.061387  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:24:10.061420  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:24:10.061432  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:24:10.061458  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:10.062052  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:10.062220  407433 main.go:141] libmachine: (ha-628553-m04) Calling .DriverName
	I1007 12:24:10.062317  407433 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:24:10.062357  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	W1007 12:24:10.062471  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:24:10.062498  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	W1007 12:24:10.062511  407433 proxy.go:119] fail to check proxy env: Error ip not in block
	I1007 12:24:10.062599  407433 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:24:10.062623  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHHostname
	I1007 12:24:10.065003  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.065178  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.065378  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:10.065403  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.065574  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:10.065589  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:10.065629  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:10.065766  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHPort
	I1007 12:24:10.065776  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:10.065941  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHKeyPath
	I1007 12:24:10.065946  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:10.066052  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetSSHUsername
	I1007 12:24:10.066122  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:10.066197  407433 sshutil.go:53] new ssh client: &{IP:192.168.39.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/ha-628553-m04/id_rsa Username:docker}
	I1007 12:24:10.295113  407433 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:24:10.303392  407433 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:24:10.303485  407433 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:24:10.322649  407433 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:24:10.322683  407433 start.go:495] detecting cgroup driver to use...
	I1007 12:24:10.322757  407433 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:24:10.344603  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:24:10.361918  407433 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:24:10.361994  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:24:10.378103  407433 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:24:10.395313  407433 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:24:10.539031  407433 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:24:10.698607  407433 docker.go:233] disabling docker service ...
	I1007 12:24:10.698680  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:24:10.714061  407433 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:24:10.732030  407433 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:24:10.889095  407433 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:24:11.018542  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:24:11.033237  407433 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:24:11.055141  407433 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:24:11.055262  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.067312  407433 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:24:11.067393  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.079866  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.092168  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.104042  407433 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:24:11.117127  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.130033  407433 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.149837  407433 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:24:11.161801  407433 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:24:11.171884  407433 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:24:11.171961  407433 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:24:11.186081  407433 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:24:11.198005  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:24:11.320021  407433 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:24:11.419036  407433 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:24:11.419128  407433 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:24:11.424768  407433 start.go:563] Will wait 60s for crictl version
	I1007 12:24:11.424850  407433 ssh_runner.go:195] Run: which crictl
	I1007 12:24:11.429617  407433 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:24:11.477303  407433 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:24:11.477390  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:24:11.509335  407433 ssh_runner.go:195] Run: crio --version
	I1007 12:24:11.543903  407433 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:24:11.545292  407433 out.go:177]   - env NO_PROXY=192.168.39.110
	I1007 12:24:11.546729  407433 out.go:177]   - env NO_PROXY=192.168.39.110,192.168.39.169
	I1007 12:24:11.548180  407433 out.go:177]   - env NO_PROXY=192.168.39.110,192.168.39.169,192.168.39.149
	I1007 12:24:11.549562  407433 main.go:141] libmachine: (ha-628553-m04) Calling .GetIP
	I1007 12:24:11.552864  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:11.553327  407433 main.go:141] libmachine: (ha-628553-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:c5:aa", ip: ""} in network mk-ha-628553: {Iface:virbr1 ExpiryTime:2024-10-07 13:24:00 +0000 UTC Type:0 Mac:52:54:00:be:c5:aa Iaid: IPaddr:192.168.39.119 Prefix:24 Hostname:ha-628553-m04 Clientid:01:52:54:00:be:c5:aa}
	I1007 12:24:11.553360  407433 main.go:141] libmachine: (ha-628553-m04) DBG | domain ha-628553-m04 has defined IP address 192.168.39.119 and MAC address 52:54:00:be:c5:aa in network mk-ha-628553
	I1007 12:24:11.553659  407433 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:24:11.558394  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:24:11.573119  407433 mustload.go:65] Loading cluster: ha-628553
	I1007 12:24:11.573407  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:24:11.573795  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:24:11.573848  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:24:11.590317  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43375
	I1007 12:24:11.590869  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:24:11.591440  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:24:11.591464  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:24:11.591796  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:24:11.591994  407433 main.go:141] libmachine: (ha-628553) Calling .GetState
	I1007 12:24:11.593783  407433 host.go:66] Checking if "ha-628553" exists ...
	I1007 12:24:11.594165  407433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:24:11.594216  407433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:24:11.610436  407433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I1007 12:24:11.610984  407433 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:24:11.611543  407433 main.go:141] libmachine: Using API Version  1
	I1007 12:24:11.611566  407433 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:24:11.612084  407433 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:24:11.612283  407433 main.go:141] libmachine: (ha-628553) Calling .DriverName
	I1007 12:24:11.612454  407433 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553 for IP: 192.168.39.119
	I1007 12:24:11.612468  407433 certs.go:194] generating shared ca certs ...
	I1007 12:24:11.612487  407433 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:24:11.612655  407433 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:24:11.612707  407433 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:24:11.612726  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:24:11.612746  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:24:11.612762  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:24:11.612778  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:24:11.612849  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:24:11.612891  407433 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:24:11.612907  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:24:11.612938  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:24:11.612970  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:24:11.613001  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:24:11.613050  407433 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:24:11.613088  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:24:11.613107  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:24:11.613124  407433 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:24:11.613152  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:24:11.644899  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:24:11.672793  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:24:11.699075  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:24:11.728119  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:24:11.755027  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:24:11.781899  407433 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:24:11.809176  407433 ssh_runner.go:195] Run: openssl version
	I1007 12:24:11.815973  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:24:11.828522  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:24:11.833206  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:24:11.833281  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:24:11.839689  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:24:11.850931  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:24:11.862646  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:24:11.867557  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:24:11.867622  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:24:11.873559  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:24:11.886128  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:24:11.898496  407433 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:24:11.903740  407433 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:24:11.903830  407433 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:24:11.910375  407433 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
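	(The certificate block above repeats one pattern per CA bundle: link the PEM into /usr/share/ca-certificates, hash it with openssl, then symlink a hash-named file into /etc/ssl/certs. A minimal sketch of that pattern, using the minikubeCA file from this run; the hash is whatever openssl prints, b5213941 here:)
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"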
	I1007 12:24:11.923085  407433 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:24:11.927900  407433 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:24:11.927956  407433 kubeadm.go:934] updating node {m04 192.168.39.119 0 v1.31.1  false true} ...
	I1007 12:24:11.928056  407433 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-628553-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-628553 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:24:11.928132  407433 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:24:11.939738  407433 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:24:11.939830  407433 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1007 12:24:11.951094  407433 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1007 12:24:11.970139  407433 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:24:11.989618  407433 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1007 12:24:11.994178  407433 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:24:12.008011  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:24:12.131341  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
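	(A quick, hedged way to double-check the kubelet wiring written above on the new worker; these commands are an aside, not part of the run, with the unit and drop-in paths taken from the log:)
	  sudo systemctl cat kubelet | grep -A2 '^ExecStart=/var/lib/minikube'
	  systemctl is-active kubelet   # expect: active once the start below succeeds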
	I1007 12:24:12.151246  407433 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.39.119 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I1007 12:24:12.151624  407433 config.go:182] Loaded profile config "ha-628553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:24:12.154458  407433 out.go:177] * Verifying Kubernetes components...
	I1007 12:24:12.156015  407433 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:24:12.347894  407433 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:24:12.373838  407433 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:24:12.374206  407433 kapi.go:59] client config for ha-628553: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/ha-628553/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1007 12:24:12.374306  407433 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.110:8443
	I1007 12:24:12.374617  407433 node_ready.go:35] waiting up to 6m0s for node "ha-628553-m04" to be "Ready" ...
	I1007 12:24:12.374742  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:12.374755  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.374772  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.374783  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.378952  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:12.874926  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:12.874951  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.874978  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.874984  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.878534  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:12.879022  407433 node_ready.go:49] node "ha-628553-m04" has status "Ready":"True"
	I1007 12:24:12.879045  407433 node_ready.go:38] duration metric: took 504.401986ms for node "ha-628553-m04" to be "Ready" ...
	I1007 12:24:12.879059  407433 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
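	(The long run of GETs that follows polls each system pod and its node one at a time. A rough hand-run equivalent of the same wait, as a sketch only: the label sets are copied from the log line above, the kubectl context name is assumed to match the profile, and kubectl wait stands in for minikube's internal polling:)
	  kubectl --context ha-628553 -n kube-system wait pod -l 'k8s-app in (kube-dns, kube-proxy)' --for=condition=Ready --timeout=6m
	  kubectl --context ha-628553 -n kube-system wait pod -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' --for=condition=Ready --timeout=6m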
	I1007 12:24:12.879143  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods
	I1007 12:24:12.879154  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.879166  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.879174  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.884847  407433 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1007 12:24:12.893298  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.893432  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ktmzq
	I1007 12:24:12.893447  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.893458  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.893465  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.897638  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:12.898370  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:12.898388  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.898396  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.898400  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.901304  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.901875  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:12.901901  407433 pod_ready.go:82] duration metric: took 8.568632ms for pod "coredns-7c65d6cfc9-ktmzq" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.901917  407433 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.902001  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-rsr6v
	I1007 12:24:12.902009  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.902017  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.902024  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.905015  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.905856  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:12.905879  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.905887  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.905890  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.908998  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:12.909611  407433 pod_ready.go:93] pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:12.909632  407433 pod_ready.go:82] duration metric: took 7.704219ms for pod "coredns-7c65d6cfc9-rsr6v" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.909643  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.909711  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553
	I1007 12:24:12.909719  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.909727  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.909733  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.912570  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.913034  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:12.913047  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.913055  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.913060  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.915920  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.916595  407433 pod_ready.go:93] pod "etcd-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:12.916619  407433 pod_ready.go:82] duration metric: took 6.969737ms for pod "etcd-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.916631  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.916698  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m02
	I1007 12:24:12.916708  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.916716  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.916720  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.919049  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.919698  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:12.919716  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:12.919727  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:12.919732  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:12.922473  407433 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1007 12:24:12.922974  407433 pod_ready.go:93] pod "etcd-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:12.922997  407433 pod_ready.go:82] duration metric: took 6.358628ms for pod "etcd-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:12.923011  407433 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:13.075490  407433 request.go:632] Waited for 152.391076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:24:13.075561  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553-m03
	I1007 12:24:13.075567  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:13.075575  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:13.075580  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:13.079745  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:13.275957  407433 request.go:632] Waited for 195.439243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:13.276022  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:13.276029  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:13.276038  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:13.276044  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:13.280027  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:13.280708  407433 pod_ready.go:93] pod "etcd-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:13.280727  407433 pod_ready.go:82] duration metric: took 357.709145ms for pod "etcd-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:13.280747  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:13.475839  407433 request.go:632] Waited for 195.001393ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:24:13.475898  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553
	I1007 12:24:13.475904  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:13.475912  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:13.475922  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:13.479095  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:13.675375  407433 request.go:632] Waited for 195.417553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:13.675447  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:13.675453  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:13.675462  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:13.675469  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:13.679265  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:13.679878  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:13.679901  407433 pod_ready.go:82] duration metric: took 399.147153ms for pod "kube-apiserver-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:13.679911  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:13.875768  407433 request.go:632] Waited for 195.749757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:24:13.875851  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m02
	I1007 12:24:13.875863  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:13.875878  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:13.875887  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:13.879948  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:14.075272  407433 request.go:632] Waited for 194.404378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:14.075382  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:14.075394  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:14.075409  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:14.075420  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:14.079859  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:14.080336  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:14.080360  407433 pod_ready.go:82] duration metric: took 400.441209ms for pod "kube-apiserver-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:14.080373  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:14.275394  407433 request.go:632] Waited for 194.922319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:24:14.275475  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553-m03
	I1007 12:24:14.275484  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:14.275496  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:14.275508  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:14.279992  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:14.475315  407433 request.go:632] Waited for 194.387646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:14.475396  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:14.475405  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:14.475441  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:14.475451  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:14.483945  407433 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1007 12:24:14.484501  407433 pod_ready.go:93] pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:14.484526  407433 pod_ready.go:82] duration metric: took 404.144521ms for pod "kube-apiserver-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:14.484544  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:14.675489  407433 request.go:632] Waited for 190.836423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:24:14.675562  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553
	I1007 12:24:14.675569  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:14.675577  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:14.675583  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:14.679839  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:14.875299  407433 request.go:632] Waited for 194.392469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:14.875398  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:14.875410  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:14.875424  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:14.875432  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:14.879672  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:14.880203  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:14.880226  407433 pod_ready.go:82] duration metric: took 395.672855ms for pod "kube-controller-manager-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:14.880241  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:15.075526  407433 request.go:632] Waited for 195.189096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:24:15.075642  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m02
	I1007 12:24:15.075657  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:15.075670  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:15.075680  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:15.079700  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:15.275039  407433 request.go:632] Waited for 194.41743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:15.275106  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:15.275111  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:15.275120  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:15.275124  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:15.279593  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:15.280146  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:15.280172  407433 pod_ready.go:82] duration metric: took 399.921739ms for pod "kube-controller-manager-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:15.280187  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:15.475240  407433 request.go:632] Waited for 194.951573ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:24:15.475322  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-628553-m03
	I1007 12:24:15.475331  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:15.475344  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:15.475352  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:15.479019  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:15.675276  407433 request.go:632] Waited for 195.277446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:15.675361  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:15.675369  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:15.675384  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:15.675394  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:15.678988  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:15.679750  407433 pod_ready.go:93] pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:15.679773  407433 pod_ready.go:82] duration metric: took 399.578882ms for pod "kube-controller-manager-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:15.679786  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:15.875879  407433 request.go:632] Waited for 196.016204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:24:15.875965  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-956k4
	I1007 12:24:15.875971  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:15.875977  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:15.875984  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:15.879439  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:16.075738  407433 request.go:632] Waited for 195.358684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:16.075835  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:16.075843  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:16.075854  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:16.075915  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:16.080069  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:16.080869  407433 pod_ready.go:93] pod "kube-proxy-956k4" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:16.080901  407433 pod_ready.go:82] duration metric: took 401.107884ms for pod "kube-proxy-956k4" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:16.080917  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:16.275390  407433 request.go:632] Waited for 194.353063ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:24:16.275486  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:24:16.275495  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:16.275506  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:16.275514  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:16.280026  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:16.475130  407433 request.go:632] Waited for 194.263013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:16.475202  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:16.475208  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:16.475220  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:16.475230  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:16.479156  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:16.674953  407433 request.go:632] Waited for 93.299818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:24:16.675053  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:24:16.675059  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:16.675067  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:16.675073  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:16.679150  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:16.875324  407433 request.go:632] Waited for 195.43361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:16.875417  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:16.875422  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:16.875431  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:16.875439  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:16.878887  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:17.081707  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fkzqr
	I1007 12:24:17.081732  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:17.081740  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:17.081744  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:17.085874  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:17.275068  407433 request.go:632] Waited for 188.303218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:17.275143  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m04
	I1007 12:24:17.275149  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:17.275159  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:17.275169  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:17.279302  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:17.280427  407433 pod_ready.go:93] pod "kube-proxy-fkzqr" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:17.280456  407433 pod_ready.go:82] duration metric: took 1.199530131s for pod "kube-proxy-fkzqr" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:17.280471  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:17.475955  407433 request.go:632] Waited for 195.373968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:24:17.476036  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6vg8
	I1007 12:24:17.476042  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:17.476050  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:17.476054  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:17.480604  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:17.675952  407433 request.go:632] Waited for 194.407397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:17.676034  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:17.676046  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:17.676055  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:17.676066  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:17.679557  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:17.680215  407433 pod_ready.go:93] pod "kube-proxy-h6vg8" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:17.680249  407433 pod_ready.go:82] duration metric: took 399.768958ms for pod "kube-proxy-h6vg8" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:17.680264  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:17.875340  407433 request.go:632] Waited for 194.957231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:24:17.875449  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5c6d
	I1007 12:24:17.875462  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:17.875474  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:17.875484  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:17.880414  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:18.075604  407433 request.go:632] Waited for 194.415341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:18.075685  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:18.075695  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:18.075706  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:18.075745  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:18.080238  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:18.081052  407433 pod_ready.go:93] pod "kube-proxy-s5c6d" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:18.081084  407433 pod_ready.go:82] duration metric: took 400.80865ms for pod "kube-proxy-s5c6d" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:18.081120  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:18.275983  407433 request.go:632] Waited for 194.754645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:24:18.276047  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553
	I1007 12:24:18.276052  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:18.276060  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:18.276075  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:18.280458  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:18.475558  407433 request.go:632] Waited for 194.403216ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:18.475636  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553
	I1007 12:24:18.475644  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:18.475655  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:18.475662  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:18.479831  407433 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1007 12:24:18.480490  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:18.480514  407433 pod_ready.go:82] duration metric: took 399.379545ms for pod "kube-scheduler-ha-628553" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:18.480527  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:18.675666  407433 request.go:632] Waited for 195.040798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:24:18.675726  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m02
	I1007 12:24:18.675732  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:18.675740  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:18.675745  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:18.765318  407433 round_trippers.go:574] Response Status: 200 OK in 89 milliseconds
	I1007 12:24:18.875428  407433 request.go:632] Waited for 109.26966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:18.875492  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m02
	I1007 12:24:18.875499  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:18.875511  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:18.875518  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:18.879039  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:18.880174  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:18.880197  407433 pod_ready.go:82] duration metric: took 399.66167ms for pod "kube-scheduler-ha-628553-m02" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:18.880208  407433 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:19.075186  407433 request.go:632] Waited for 194.895451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:24:19.075252  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-628553-m03
	I1007 12:24:19.075258  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:19.075265  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:19.075269  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:19.078730  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:19.275491  407433 request.go:632] Waited for 195.988081ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:19.275568  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes/ha-628553-m03
	I1007 12:24:19.275575  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:19.275588  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:19.275600  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:19.279147  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:19.279845  407433 pod_ready.go:93] pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace has status "Ready":"True"
	I1007 12:24:19.279865  407433 pod_ready.go:82] duration metric: took 399.650057ms for pod "kube-scheduler-ha-628553-m03" in "kube-system" namespace to be "Ready" ...
	I1007 12:24:19.279877  407433 pod_ready.go:39] duration metric: took 6.400801701s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:24:19.279894  407433 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:24:19.279956  407433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:24:19.295966  407433 system_svc.go:56] duration metric: took 16.062104ms WaitForService to wait for kubelet
	I1007 12:24:19.295995  407433 kubeadm.go:582] duration metric: took 7.144696991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:24:19.296018  407433 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:24:19.475506  407433 request.go:632] Waited for 179.37132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.110:8443/api/v1/nodes
	I1007 12:24:19.475571  407433 round_trippers.go:463] GET https://192.168.39.110:8443/api/v1/nodes
	I1007 12:24:19.475579  407433 round_trippers.go:469] Request Headers:
	I1007 12:24:19.475594  407433 round_trippers.go:473]     Accept: application/json, */*
	I1007 12:24:19.475604  407433 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1007 12:24:19.479543  407433 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1007 12:24:19.481199  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:24:19.481225  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:24:19.481236  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:24:19.481239  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:24:19.481243  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:24:19.481246  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:24:19.481250  407433 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:24:19.481253  407433 node_conditions.go:123] node cpu capacity is 2
	I1007 12:24:19.481258  407433 node_conditions.go:105] duration metric: took 185.233923ms to run NodePressure ...
	I1007 12:24:19.481270  407433 start.go:241] waiting for startup goroutines ...
	I1007 12:24:19.481290  407433 start.go:255] writing updated cluster config ...
	I1007 12:24:19.481593  407433 ssh_runner.go:195] Run: rm -f paused
	I1007 12:24:19.533793  407433 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:24:19.536976  407433 out.go:177] * Done! kubectl is now configured to use "ha-628553" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.550216756Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304049550193272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d0a9a12-cfbd-4133-adbf-c2cb8e6fcd4e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.550809656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc621246-62f4-4c1b-9708-b8d40c68ee67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.550867286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc621246-62f4-4c1b-9708-b8d40c68ee67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.552917636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:606bb92353e8608947ca6c5edaaeb447dd03f364d07574451a62d7ddd1de7b44,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728303984815408753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b31aa2090e1c676b6f1af874d4c4b96236cfe66a1296bdfbe6e0b7de09fd6eb,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728303926315093520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da10c5d0bffba94bdabbdd93bd3adb14827488f1138fa21c42dc110c2794f1a8,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728303901319366674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da4bac7b9a1c61601b94980b04cbc1084bcf04c9c729210b0b371dab75df63d,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728303890177479234,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b121c4ce3f1c89617c1956de3bda850aa7df80ece2e55818c025ed03056dd739,PodSandboxId:8f3f66727ce1365593b28e63e649be987881153fc1dee7f4411e615f3062bd88,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303762625936143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b22cd52cf94f58f06a2f709e80ba61098a2e7fdfb76f690099d678912f9b19,PodSandboxId:aa6d4b081f68e17f4b8e697261e0c1b77db8c50c86e50eb3920acfea96816c1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1728303761407740922,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:baa8694f118c36b612561775e109d541e2a915f312d37d6e3be467a057106e52,PodSandboxId:ae1ce8ae39c0a3f0ebc9445e9a6282a68d71e6ea1d5b420140ef0d735cb39c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761279049523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-5407-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ec3666316198103acd7321ee499a1c56d87b82aa2872b2c455d2d56d79c00,PodSandboxId:80740c542dfa153e08d5e004001447f8625d19fa1c0d59dd5698a49011d35871,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761188755535,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b7814517854b690eef4baa06ba056aa5af0f6fad15d83fb160d2962677836,PodSandboxId:e9a17fae1d59f0a1f4cd6cedb57e8596bccee204e61dd7798da94c478152a769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728303761014117856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f3bec737bfa85e116bde94fa78421a9c6ed6155b02d6687349eded1294e2c,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728303746495412294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484445a153ab89a201903c72ad1e56ff571951cfce2a89c1851318ea9522b4b1,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_EXITED,CreatedAt:1728303718484189839,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-v
ip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a6976c1286a07048e803e2a844dc480948730521d22549e3eb0f742fbccc91,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728303716062185224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-
manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcf62f683c219b741bda686149685e6169c8c1cbaa701e6ed54e473f53abac,PodSandboxId:5994bdf27bafee82146b6f8274d09d5805aaae55232bc8d15e6119990968d7c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728303715994464544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fb6227eb362f9d9b97a269451c541ab7c49c72f67128bee5659d44d441d54d,PodSandboxId:7ed281d4e14b42ab7ffe767ed7ee9b3ac644cab85e2ed4ecd79c789353c64949,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728303715902605465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc621246-62f4-4c1b-9708-b8d40c68ee67 name=/runtime.v1.RuntimeService/ListContainers
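
The entries above are the periodic CRI polls (Version, ImageFsInfo, ListContainers) that CRI-O answers over its gRPC socket. As a sketch (not part of the captured output; the socket path below is CRI-O's usual default), the same data can be pulled by hand with crictl inside the node:

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version       # RuntimeName: cri-o, RuntimeVersion: 1.29.1
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo   # usage for /var/lib/containers/storage/overlay-images
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a         # full container list, including exited restart attempts

From the test host the same commands can be run through the profile's VM, e.g. out/minikube-linux-amd64 -p ha-628553 ssh "sudo crictl ps -a".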
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.603230595Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0b39062a-8599-4fec-8d4f-130d20def870 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.603310182Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b39062a-8599-4fec-8d4f-130d20def870 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.605199808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99dc493b-b694-44b9-a981-d58cff22c01b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.605735940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304049605707570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99dc493b-b694-44b9-a981-d58cff22c01b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.606291506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=837498f3-d86a-49d7-b34a-27fb7be7d999 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.606376095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=837498f3-d86a-49d7-b34a-27fb7be7d999 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.606782695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:606bb92353e8608947ca6c5edaaeb447dd03f364d07574451a62d7ddd1de7b44,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728303984815408753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b31aa2090e1c676b6f1af874d4c4b96236cfe66a1296bdfbe6e0b7de09fd6eb,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728303926315093520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da10c5d0bffba94bdabbdd93bd3adb14827488f1138fa21c42dc110c2794f1a8,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728303901319366674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da4bac7b9a1c61601b94980b04cbc1084bcf04c9c729210b0b371dab75df63d,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728303890177479234,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b121c4ce3f1c89617c1956de3bda850aa7df80ece2e55818c025ed03056dd739,PodSandboxId:8f3f66727ce1365593b28e63e649be987881153fc1dee7f4411e615f3062bd88,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303762625936143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b22cd52cf94f58f06a2f709e80ba61098a2e7fdfb76f690099d678912f9b19,PodSandboxId:aa6d4b081f68e17f4b8e697261e0c1b77db8c50c86e50eb3920acfea96816c1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1728303761407740922,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:baa8694f118c36b612561775e109d541e2a915f312d37d6e3be467a057106e52,PodSandboxId:ae1ce8ae39c0a3f0ebc9445e9a6282a68d71e6ea1d5b420140ef0d735cb39c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761279049523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-5407-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ec3666316198103acd7321ee499a1c56d87b82aa2872b2c455d2d56d79c00,PodSandboxId:80740c542dfa153e08d5e004001447f8625d19fa1c0d59dd5698a49011d35871,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761188755535,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b7814517854b690eef4baa06ba056aa5af0f6fad15d83fb160d2962677836,PodSandboxId:e9a17fae1d59f0a1f4cd6cedb57e8596bccee204e61dd7798da94c478152a769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728303761014117856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f3bec737bfa85e116bde94fa78421a9c6ed6155b02d6687349eded1294e2c,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728303746495412294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484445a153ab89a201903c72ad1e56ff571951cfce2a89c1851318ea9522b4b1,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_EXITED,CreatedAt:1728303718484189839,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-v
ip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a6976c1286a07048e803e2a844dc480948730521d22549e3eb0f742fbccc91,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728303716062185224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-
manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcf62f683c219b741bda686149685e6169c8c1cbaa701e6ed54e473f53abac,PodSandboxId:5994bdf27bafee82146b6f8274d09d5805aaae55232bc8d15e6119990968d7c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728303715994464544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fb6227eb362f9d9b97a269451c541ab7c49c72f67128bee5659d44d441d54d,PodSandboxId:7ed281d4e14b42ab7ffe767ed7ee9b3ac644cab85e2ed4ecd79c789353c64949,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728303715902605465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=837498f3-d86a-49d7-b34a-27fb7be7d999 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.648367622Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce2f5c5a-1a8e-4e8d-adfe-a2f2d75b0881 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.648485838Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce2f5c5a-1a8e-4e8d-adfe-a2f2d75b0881 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.649621421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=094a3a80-edb4-4ebd-8c5e-1de77da1ba49 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.650164540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304049650141496,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=094a3a80-edb4-4ebd-8c5e-1de77da1ba49 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.650920733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fa72ad2-9e1a-4e7d-804d-1bdc9483ba4f name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.650993544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fa72ad2-9e1a-4e7d-804d-1bdc9483ba4f name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.651275144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:606bb92353e8608947ca6c5edaaeb447dd03f364d07574451a62d7ddd1de7b44,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728303984815408753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b31aa2090e1c676b6f1af874d4c4b96236cfe66a1296bdfbe6e0b7de09fd6eb,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728303926315093520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da10c5d0bffba94bdabbdd93bd3adb14827488f1138fa21c42dc110c2794f1a8,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728303901319366674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da4bac7b9a1c61601b94980b04cbc1084bcf04c9c729210b0b371dab75df63d,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728303890177479234,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b121c4ce3f1c89617c1956de3bda850aa7df80ece2e55818c025ed03056dd739,PodSandboxId:8f3f66727ce1365593b28e63e649be987881153fc1dee7f4411e615f3062bd88,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303762625936143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b22cd52cf94f58f06a2f709e80ba61098a2e7fdfb76f690099d678912f9b19,PodSandboxId:aa6d4b081f68e17f4b8e697261e0c1b77db8c50c86e50eb3920acfea96816c1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1728303761407740922,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:baa8694f118c36b612561775e109d541e2a915f312d37d6e3be467a057106e52,PodSandboxId:ae1ce8ae39c0a3f0ebc9445e9a6282a68d71e6ea1d5b420140ef0d735cb39c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761279049523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-5407-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ec3666316198103acd7321ee499a1c56d87b82aa2872b2c455d2d56d79c00,PodSandboxId:80740c542dfa153e08d5e004001447f8625d19fa1c0d59dd5698a49011d35871,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761188755535,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b7814517854b690eef4baa06ba056aa5af0f6fad15d83fb160d2962677836,PodSandboxId:e9a17fae1d59f0a1f4cd6cedb57e8596bccee204e61dd7798da94c478152a769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728303761014117856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f3bec737bfa85e116bde94fa78421a9c6ed6155b02d6687349eded1294e2c,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728303746495412294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484445a153ab89a201903c72ad1e56ff571951cfce2a89c1851318ea9522b4b1,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_EXITED,CreatedAt:1728303718484189839,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-v
ip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a6976c1286a07048e803e2a844dc480948730521d22549e3eb0f742fbccc91,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728303716062185224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-
manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcf62f683c219b741bda686149685e6169c8c1cbaa701e6ed54e473f53abac,PodSandboxId:5994bdf27bafee82146b6f8274d09d5805aaae55232bc8d15e6119990968d7c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728303715994464544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fb6227eb362f9d9b97a269451c541ab7c49c72f67128bee5659d44d441d54d,PodSandboxId:7ed281d4e14b42ab7ffe767ed7ee9b3ac644cab85e2ed4ecd79c789353c64949,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728303715902605465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fa72ad2-9e1a-4e7d-804d-1bdc9483ba4f name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.693141574Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=813ab3ea-ae0f-4843-8174-34435680b485 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.693239300Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=813ab3ea-ae0f-4843-8174-34435680b485 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.702812684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc071f6c-5edc-4a41-b7bf-e37badbde5de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.703347520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304049703245529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc071f6c-5edc-4a41-b7bf-e37badbde5de name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.704217568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9687b103-fdbb-419b-9424-b13da51f8adc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.704296966Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9687b103-fdbb-419b-9424-b13da51f8adc name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:27:29 ha-628553 crio[948]: time="2024-10-07 12:27:29.704789027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:606bb92353e8608947ca6c5edaaeb447dd03f364d07574451a62d7ddd1de7b44,PodSandboxId:ba58486de78c0ae0b46713004a8fbe629ede70949de8d85c52d9dbae2281e392,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728303984815408753,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9e39492eb2c4bce38dd565366b0984,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b31aa2090e1c676b6f1af874d4c4b96236cfe66a1296bdfbe6e0b7de09fd6eb,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728303926315093520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da10c5d0bffba94bdabbdd93bd3adb14827488f1138fa21c42dc110c2794f1a8,PodSandboxId:17641b07f74e743477e3cede8a3f44cb4a6f962e0b1a1cf09e7b25febff711cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728303901319366674,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da4bac7b9a1c61601b94980b04cbc1084bcf04c9c729210b0b371dab75df63d,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:1728303890177479234,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b121c4ce3f1c89617c1956de3bda850aa7df80ece2e55818c025ed03056dd739,PodSandboxId:8f3f66727ce1365593b28e63e649be987881153fc1dee7f4411e615f3062bd88,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728303762625936143,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-vc5k8,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8b53e3fe-5dba-4b37-b415-380bb77e5fd2,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d3b22cd52cf94f58f06a2f709e80ba61098a2e7fdfb76f690099d678912f9b19,PodSandboxId:aa6d4b081f68e17f4b8e697261e0c1b77db8c50c86e50eb3920acfea96816c1f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1728303761407740922,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-snf5v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6360ec2-8f69-454b-9bfc-d636ebd8b372,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:baa8694f118c36b612561775e109d541e2a915f312d37d6e3be467a057106e52,PodSandboxId:ae1ce8ae39c0a3f0ebc9445e9a6282a68d71e6ea1d5b420140ef0d735cb39c07,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761279049523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ktmzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda6ae24-5407-4f63-9a56-29fa9eba8966,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"p
rotocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:686ec3666316198103acd7321ee499a1c56d87b82aa2872b2c455d2d56d79c00,PodSandboxId:80740c542dfa153e08d5e004001447f8625d19fa1c0d59dd5698a49011d35871,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728303761188755535,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-rsr6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60fd800f-38f1-40d5-9ecf-cbf21bf5add6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernet
es.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e98b7814517854b690eef4baa06ba056aa5af0f6fad15d83fb160d2962677836,PodSandboxId:e9a17fae1d59f0a1f4cd6cedb57e8596bccee204e61dd7798da94c478152a769,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728303761014117856,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-pr
oxy-h6vg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97dd82f4-8e31-4569-b762-fc804d08efb0,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:225f3bec737bfa85e116bde94fa78421a9c6ed6155b02d6687349eded1294e2c,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728303746495412294,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controlle
r-manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484445a153ab89a201903c72ad1e56ff571951cfce2a89c1851318ea9522b4b1,PodSandboxId:b6f9f9b6f13deddec5234b06e4515ce546cc887688ce3adff22c3c16e84eefc9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_EXITED,CreatedAt:1728303718484189839,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-v
ip-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9df0eeae4932743e946b9f74b4181463,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0a6976c1286a07048e803e2a844dc480948730521d22549e3eb0f742fbccc91,PodSandboxId:30929f0f1c9a58c16f32d1c9c5fd09379453b8520a879fb6d4c451b9ed856f11,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728303716062185224,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-
manager-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ddceaa845e9d579fdd80284eb5bd959,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0bcf62f683c219b741bda686149685e6169c8c1cbaa701e6ed54e473f53abac,PodSandboxId:5994bdf27bafee82146b6f8274d09d5805aaae55232bc8d15e6119990968d7c1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728303715994464544,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-628553,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 0cb7efd98e1775704789a8938bb7525f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95fb6227eb362f9d9b97a269451c541ab7c49c72f67128bee5659d44d441d54d,PodSandboxId:7ed281d4e14b42ab7ffe767ed7ee9b3ac644cab85e2ed4ecd79c789353c64949,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728303715902605465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-628553,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: fa78002d344fb10ba4bceb5ed1731c87,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9687b103-fdbb-419b-9424-b13da51f8adc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	606bb92353e86       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Exited              kube-apiserver            3                   ba58486de78c0       kube-apiserver-ha-628553
	9b31aa2090e1c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Running             storage-provisioner       5                   17641b07f74e7       storage-provisioner
	da10c5d0bffba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   17641b07f74e7       storage-provisioner
	6da4bac7b9a1c       18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460                                      2 minutes ago        Running             kube-vip                  1                   b6f9f9b6f13de       kube-vip-ha-628553
	b121c4ce3f1c8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago        Running             busybox                   1                   8f3f66727ce13       busybox-7dff88458-vc5k8
	d3b22cd52cf94       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago        Running             kindnet-cni               1                   aa6d4b081f68e       kindnet-snf5v
	baa8694f118c3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago        Running             coredns                   1                   ae1ce8ae39c0a       coredns-7c65d6cfc9-ktmzq
	686ec36663161       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago        Running             coredns                   1                   80740c542dfa1       coredns-7c65d6cfc9-rsr6v
	e98b781451785       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago        Running             kube-proxy                1                   e9a17fae1d59f       kube-proxy-h6vg8
	225f3bec737bf       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago        Running             kube-controller-manager   2                   30929f0f1c9a5       kube-controller-manager-ha-628553
	484445a153ab8       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     5 minutes ago        Exited              kube-vip                  0                   b6f9f9b6f13de       kube-vip-ha-628553
	f0a6976c1286a       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago        Exited              kube-controller-manager   1                   30929f0f1c9a5       kube-controller-manager-ha-628553
	f0bcf62f683c2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago        Running             etcd                      1                   5994bdf27bafe       etcd-ha-628553
	95fb6227eb362       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago        Running             kube-scheduler            1                   7ed281d4e14b4       kube-scheduler-ha-628553
	
	
	==> coredns [686ec3666316198103acd7321ee499a1c56d87b82aa2872b2c455d2d56d79c00] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[280398106]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.760) (total time: 30005ms):
	Trace[280398106]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (12:23:11.765)
	Trace[280398106]: [30.00544298s] [30.00544298s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[608973468]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.764) (total time: 30001ms):
	Trace[608973468]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (12:23:11.766)
	Trace[608973468]: [30.001242765s] [30.001242765s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[658286387]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.757) (total time: 30008ms):
	Trace[658286387]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30008ms (12:23:11.766)
	Trace[658286387]: [30.008700424s] [30.008700424s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2349": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2349": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [baa8694f118c36b612561775e109d541e2a915f312d37d6e3be467a057106e52] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1592772175]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.757) (total time: 30005ms):
	Trace[1592772175]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (12:23:11.762)
	Trace[1592772175]: [30.005216016s] [30.005216016s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[381370772]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.761) (total time: 30002ms):
	Trace[381370772]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (12:23:11.764)
	Trace[381370772]: [30.002479951s] [30.002479951s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[770192097]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Oct-2024 12:22:41.762) (total time: 30002ms):
	Trace[770192097]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (12:23:11.764)
	Trace[770192097]: [30.002450665s] [30.002450665s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io)
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2381": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=2381": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2349": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=2349": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 7 12:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051426] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039314] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.872196] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.731407] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.642671] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.201157] systemd-fstab-generator[872]: Ignoring "noauto" option for root device
	[  +0.059612] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060379] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.204754] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.116063] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +0.296542] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +4.144385] systemd-fstab-generator[1043]: Ignoring "noauto" option for root device
	[  +0.347991] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.519719] kauditd_printk_skb: 1 callbacks suppressed
	[Oct 7 12:22] kauditd_printk_skb: 40 callbacks suppressed
	[Oct 7 12:23] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [f0bcf62f683c219b741bda686149685e6169c8c1cbaa701e6ed54e473f53abac] <==
	{"level":"info","ts":"2024-10-07T12:27:26.105307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-07T12:27:26.105380Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-07T12:27:26.105432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2024-10-07T12:27:26.105450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 [logterm: 3, index: 2915] sent MsgPreVote request to c4e3087522f8e2e6 at term 3"}
	{"level":"warn","ts":"2024-10-07T12:27:26.237825Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11932448183492845811,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-10-07T12:27:26.590639Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4e3087522f8e2e6","rtt":"1.069355ms","error":"dial tcp 192.168.39.169:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-10-07T12:27:26.594082Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c4e3087522f8e2e6","rtt":"9.232224ms","error":"dial tcp 192.168.39.169:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-10-07T12:27:26.738081Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11932448183492845811,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-10-07T12:27:27.238760Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11932448183492845811,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-10-07T12:27:27.730959Z","caller":"etcdserver/v3_server.go:932","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2024-10-07T12:27:27.731139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"13.999594022s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2024-10-07T12:27:27.731205Z","caller":"traceutil/trace.go:171","msg":"trace[1628157115] range","detail":"{range_begin:; range_end:; }","duration":"13.99967864s","start":"2024-10-07T12:27:13.731516Z","end":"2024-10-07T12:27:27.731194Z","steps":["trace[1628157115] 'agreement among raft nodes before linearized reading'  (duration: 13.999591256s)"],"step_count":1}
	{"level":"error","ts":"2024-10-07T12:27:27.731278Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-10-07T12:27:27.905282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-07T12:27:27.905348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-07T12:27:27.905364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2024-10-07T12:27:27.905378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 [logterm: 3, index: 2915] sent MsgPreVote request to c4e3087522f8e2e6 at term 3"}
	{"level":"warn","ts":"2024-10-07T12:27:28.232819Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11932448183492845812,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-10-07T12:27:28.733425Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11932448183492845812,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-10-07T12:27:29.234574Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11932448183492845812,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-10-07T12:27:29.705009Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-07T12:27:29.705067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-07T12:27:29.705098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2024-10-07T12:27:29.705112Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 [logterm: 3, index: 2915] sent MsgPreVote request to c4e3087522f8e2e6 at term 3"}
	{"level":"warn","ts":"2024-10-07T12:27:29.735567Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":11932448183492845812,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 12:27:30 up 6 min,  0 users,  load average: 0.28, 0.43, 0.21
	Linux ha-628553 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d3b22cd52cf94f58f06a2f709e80ba61098a2e7fdfb76f690099d678912f9b19] <==
	I1007 12:26:52.854971       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:26:52.854996       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:27:02.856743       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:27:02.856871       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:27:02.857025       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:27:02.857052       1 main.go:299] handling current node
	I1007 12:27:02.857075       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:27:02.857090       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	W1007 12:27:06.743260       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Unauthorized
	I1007 12:27:06.743352       1 trace.go:236] Trace[1105702705]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232 (07-Oct-2024 12:26:53.885) (total time: 12857ms):
	Trace[1105702705]: ---"Objects listed" error:Unauthorized 12857ms (12:27:06.743)
	Trace[1105702705]: [12.857707522s] [12.857707522s] END
	E1007 12:27:06.743398       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	I1007 12:27:12.854542       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:27:12.854639       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	I1007 12:27:12.854847       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:27:12.854876       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:27:12.854930       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:27:12.854948       1 main.go:299] handling current node
	I1007 12:27:22.851877       1 main.go:295] Handling node with IPs: map[192.168.39.119:{}]
	I1007 12:27:22.851926       1 main.go:322] Node ha-628553-m04 has CIDR [10.244.3.0/24] 
	I1007 12:27:22.852148       1 main.go:295] Handling node with IPs: map[192.168.39.110:{}]
	I1007 12:27:22.852156       1 main.go:299] handling current node
	I1007 12:27:22.852167       1 main.go:295] Handling node with IPs: map[192.168.39.169:{}]
	I1007 12:27:22.852171       1 main.go:322] Node ha-628553-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [606bb92353e8608947ca6c5edaaeb447dd03f364d07574451a62d7ddd1de7b44] <==
	E1007 12:27:20.783621       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E1007 12:27:20.783761       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E1007 12:27:20.783859       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E1007 12:27:20.796983       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: rpctypes.EtcdError{code:0xe, desc:\"etcdserver: request timed out\"}: etcdserver: request timed out" logger="UnhandledError"
	E1007 12:27:20.824730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PriorityClass: failed to list *v1.PriorityClass: etcdserver: request timed out" logger="UnhandledError"
	E1007 12:27:20.824750       1 cacher.go:478] cacher (csistoragecapacities.storage.k8s.io): unexpected ListAndWatch error: failed to list *storage.CSIStorageCapacity: etcdserver: request timed out; reinitializing...
	E1007 12:27:20.824759       1 cacher.go:478] cacher (secrets): unexpected ListAndWatch error: failed to list *core.Secret: etcdserver: request timed out; reinitializing...
	W1007 12:27:20.783414       1 reflector.go:561] storage/cacher.go:/pods: failed to list *core.Pod: etcdserver: request timed out
	E1007 12:27:20.825776       1 cacher.go:478] cacher (pods): unexpected ListAndWatch error: failed to list *core.Pod: etcdserver: request timed out; reinitializing...
	W1007 12:27:20.783451       1 reflector.go:561] storage/cacher.go:/prioritylevelconfigurations: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out
	E1007 12:27:20.825788       1 cacher.go:478] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out; reinitializing...
	W1007 12:27:20.783471       1 reflector.go:561] storage/cacher.go:/controllerrevisions: failed to list *apps.ControllerRevision: etcdserver: request timed out
	E1007 12:27:20.825796       1 cacher.go:478] cacher (controllerrevisions.apps): unexpected ListAndWatch error: failed to list *apps.ControllerRevision: etcdserver: request timed out; reinitializing...
	W1007 12:27:20.783491       1 reflector.go:561] storage/cacher.go:/persistentvolumeclaims: failed to list *core.PersistentVolumeClaim: etcdserver: request timed out
	E1007 12:27:20.825806       1 cacher.go:478] cacher (persistentvolumeclaims): unexpected ListAndWatch error: failed to list *core.PersistentVolumeClaim: etcdserver: request timed out; reinitializing...
	W1007 12:27:20.783512       1 reflector.go:561] storage/cacher.go:/mutatingwebhookconfigurations: failed to list *admissionregistration.MutatingWebhookConfiguration: etcdserver: request timed out
	E1007 12:27:20.825815       1 cacher.go:478] cacher (mutatingwebhookconfigurations.admissionregistration.k8s.io): unexpected ListAndWatch error: failed to list *admissionregistration.MutatingWebhookConfiguration: etcdserver: request timed out; reinitializing...
	W1007 12:27:20.783814       1 reflector.go:561] storage/cacher.go:/flowschemas: failed to list *flowcontrol.FlowSchema: etcdserver: request timed out
	E1007 12:27:20.825823       1 cacher.go:478] cacher (flowschemas.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.FlowSchema: etcdserver: request timed out; reinitializing...
	W1007 12:27:20.783837       1 reflector.go:561] storage/cacher.go:/replicasets: failed to list *apps.ReplicaSet: etcdserver: request timed out
	E1007 12:27:20.825831       1 cacher.go:478] cacher (replicasets.apps): unexpected ListAndWatch error: failed to list *apps.ReplicaSet: etcdserver: request timed out; reinitializing...
	W1007 12:27:20.796928       1 reflector.go:561] storage/cacher.go:/persistentvolumes: failed to list *core.PersistentVolume: etcdserver: request timed out
	E1007 12:27:20.825839       1 cacher.go:478] cacher (persistentvolumes): unexpected ListAndWatch error: failed to list *core.PersistentVolume: etcdserver: request timed out; reinitializing...
	W1007 12:27:20.783388       1 reflector.go:561] storage/cacher.go:/horizontalpodautoscalers: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out
	E1007 12:27:20.825848       1 cacher.go:478] cacher (horizontalpodautoscalers.autoscaling): unexpected ListAndWatch error: failed to list *autoscaling.HorizontalPodAutoscaler: etcdserver: request timed out; reinitializing...
	
	
	==> kube-controller-manager [225f3bec737bfa85e116bde94fa78421a9c6ed6155b02d6687349eded1294e2c] <==
	E1007 12:27:27.160184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.110:8443/api/v1/pods?resourceVersion=2452\": dial tcp 192.168.39.110:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:27:28.351505       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.110:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.110:8443: connect: connection refused
	E1007 12:27:28.351589       1 node_lifecycle_controller.go:978] "Error updating node" err="Put \"https://192.168.39.110:8443/api/v1/nodes/ha-628553/status\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node="ha-628553"
	W1007 12:27:28.352017       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.110:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.110:8443: connect: connection refused
	W1007 12:27:28.412956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.110:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=2422": dial tcp 192.168.39.110:8443: connect: connection refused
	E1007 12:27:28.413081       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.110:8443/apis/policy/v1/poddisruptionbudgets?resourceVersion=2422\": dial tcp 192.168.39.110:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:27:28.853426       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.110:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.110:8443: connect: connection refused
	W1007 12:27:29.357428       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://192.168.39.110:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2430": dial tcp 192.168.39.110:8443: connect: connection refused
	E1007 12:27:29.357471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://192.168.39.110:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2430\": dial tcp 192.168.39.110:8443: connect: connection refused" logger="UnhandledError"
	E1007 12:27:29.623384       1 gc_controller.go:151] "Failed to get node" err="node \"ha-628553-m03\" not found" logger="pod-garbage-collector-controller" node="ha-628553-m03"
	E1007 12:27:29.623409       1 gc_controller.go:151] "Failed to get node" err="node \"ha-628553-m03\" not found" logger="pod-garbage-collector-controller" node="ha-628553-m03"
	E1007 12:27:29.623416       1 gc_controller.go:151] "Failed to get node" err="node \"ha-628553-m03\" not found" logger="pod-garbage-collector-controller" node="ha-628553-m03"
	E1007 12:27:29.623429       1 gc_controller.go:151] "Failed to get node" err="node \"ha-628553-m03\" not found" logger="pod-garbage-collector-controller" node="ha-628553-m03"
	E1007 12:27:29.623435       1 gc_controller.go:151] "Failed to get node" err="node \"ha-628553-m03\" not found" logger="pod-garbage-collector-controller" node="ha-628553-m03"
	W1007 12:27:29.624080       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.110:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.110:8443: connect: connection refused
	W1007 12:27:29.652360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.110:8443/api/v1/persistentvolumes?resourceVersion=2424": dial tcp 192.168.39.110:8443: connect: connection refused
	E1007 12:27:29.652416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.110:8443/api/v1/persistentvolumes?resourceVersion=2424\": dial tcp 192.168.39.110:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:27:29.854589       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.110:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.110:8443: connect: connection refused
	W1007 12:27:29.861721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.110:8443/apis/apps/v1/statefulsets?resourceVersion=2427": dial tcp 192.168.39.110:8443: connect: connection refused
	E1007 12:27:29.861783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.110:8443/apis/apps/v1/statefulsets?resourceVersion=2427\": dial tcp 192.168.39.110:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:27:29.907356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RoleBinding: Get "https://192.168.39.110:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=2422": dial tcp 192.168.39.110:8443: connect: connection refused
	E1007 12:27:29.907417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RoleBinding: failed to list *v1.RoleBinding: Get \"https://192.168.39.110:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?resourceVersion=2422\": dial tcp 192.168.39.110:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:27:29.986335       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Role: Get "https://192.168.39.110:8443/apis/rbac.authorization.k8s.io/v1/roles?resourceVersion=2430": dial tcp 192.168.39.110:8443: connect: connection refused
	E1007 12:27:29.986401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Role: failed to list *v1.Role: Get \"https://192.168.39.110:8443/apis/rbac.authorization.k8s.io/v1/roles?resourceVersion=2430\": dial tcp 192.168.39.110:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:27:30.125347       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.110:8443/api/v1/namespaces/kube-system/serviceaccounts/pod-garbage-collector": dial tcp 192.168.39.110:8443: connect: connection refused
	
	
	==> kube-controller-manager [f0a6976c1286a07048e803e2a844dc480948730521d22549e3eb0f742fbccc91] <==
	I1007 12:21:57.348998       1 serving.go:386] Generated self-signed cert in-memory
	I1007 12:21:58.168523       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1007 12:21:58.171722       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:21:58.173765       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1007 12:21:58.175148       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1007 12:21:58.175214       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1007 12:21:58.175302       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1007 12:22:25.846141       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-proxy [e98b7814517854b690eef4baa06ba056aa5af0f6fad15d83fb160d2962677836] <==
	W1007 12:25:41.064371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:25:41.064569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	E1007 12:25:41.064402       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-628553&resourceVersion=2463\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:25:44.135388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:25:44.135462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:25:53.351925       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:25:53.352131       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:25:53.352323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-628553&resourceVersion=2463": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:25:53.352449       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-628553&resourceVersion=2463\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:25:53.352863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:25:53.353000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:26:08.713822       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-628553&resourceVersion=2463": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:26:08.713913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-628553&resourceVersion=2463\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:26:11.785930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:26:11.786199       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:26:17.928738       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:26:17.929263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:26:39.431627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-628553&resourceVersion=2463": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:26:39.432760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-628553&resourceVersion=2463\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:26:45.577749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:26:45.578009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:27:00.936540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:27:00.937343       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W1007 12:27:28.584020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459": dial tcp 192.168.39.254:8443: connect: no route to host
	E1007 12:27:28.584181       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2459\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [95fb6227eb362f9d9b97a269451c541ab7c49c72f67128bee5659d44d441d54d] <==
	E1007 12:27:04.976580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:27:05.857074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 12:27:05.857136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:27:06.383545       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 12:27:06.383614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:27:08.453425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 12:27:08.453588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:27:09.386160       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 12:27:09.386267       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1007 12:27:09.975469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 12:27:09.975591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:27:10.841868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 12:27:10.841984       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:27:11.049342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 12:27:11.049403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:27:15.166082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 12:27:15.166235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:27:16.409895       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 12:27:16.410059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 12:27:17.621100       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 12:27:17.621140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:27:21.177362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.110:8443/api/v1/namespaces?resourceVersion=2420": dial tcp 192.168.39.110:8443: connect: connection refused
	E1007 12:27:21.177474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.110:8443/api/v1/namespaces?resourceVersion=2420\": dial tcp 192.168.39.110:8443: connect: connection refused" logger="UnhandledError"
	W1007 12:27:23.226085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.110:8443/apis/apps/v1/replicasets?resourceVersion=2418": dial tcp 192.168.39.110:8443: connect: connection refused
	E1007 12:27:23.226154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.110:8443/apis/apps/v1/replicasets?resourceVersion=2418\": dial tcp 192.168.39.110:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kubelet <==
	Oct 07 12:27:13 ha-628553 kubelet[1050]: E1007 12:27:13.223434    1050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-628553.17fc2b39e83b13be  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-628553,UID:5f9e39492eb2c4bce38dd565366b0984,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-628553,},FirstTimestamp:2024-10-07 12:24:44.71274387 +0000 UTC m=+175.674651685,LastTimestamp:2024-10-07 12:24:44.71274387 +0000 UTC m=+175.674651685,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:
nil,ReportingController:kubelet,ReportingInstance:ha-628553,}"
	Oct 07 12:27:16 ha-628553 kubelet[1050]: I1007 12:27:16.295015    1050 status_manager.go:851] "Failed to get status for pod" podUID="f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Oct 07 12:27:19 ha-628553 kubelet[1050]: W1007 12:27:19.367087    1050 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2455": dial tcp 192.168.39.254:8443: connect: no route to host
	Oct 07 12:27:19 ha-628553 kubelet[1050]: E1007 12:27:19.367207    1050 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=2455\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Oct 07 12:27:19 ha-628553 kubelet[1050]: I1007 12:27:19.367322    1050 status_manager.go:851] "Failed to get status for pod" podUID="0cb7efd98e1775704789a8938bb7525f" pod="kube-system/etcd-ha-628553" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-628553\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Oct 07 12:27:19 ha-628553 kubelet[1050]: E1007 12:27:19.392424    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304039392085914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:27:19 ha-628553 kubelet[1050]: E1007 12:27:19.392477    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304039392085914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:27:21 ha-628553 kubelet[1050]: I1007 12:27:21.790507    1050 scope.go:117] "RemoveContainer" containerID="e9e8270b7e13c41f1067c7a2b2c48735878a1ac270029bf9bc40d0cf539e6ab4"
	Oct 07 12:27:21 ha-628553 kubelet[1050]: I1007 12:27:21.790951    1050 scope.go:117] "RemoveContainer" containerID="606bb92353e8608947ca6c5edaaeb447dd03f364d07574451a62d7ddd1de7b44"
	Oct 07 12:27:21 ha-628553 kubelet[1050]: E1007 12:27:21.791225    1050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-628553_kube-system(5f9e39492eb2c4bce38dd565366b0984)\"" pod="kube-system/kube-apiserver-ha-628553" podUID="5f9e39492eb2c4bce38dd565366b0984"
	Oct 07 12:27:22 ha-628553 kubelet[1050]: E1007 12:27:22.439298    1050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-628553?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Oct 07 12:27:22 ha-628553 kubelet[1050]: I1007 12:27:22.439300    1050 status_manager.go:851] "Failed to get status for pod" podUID="5f9e39492eb2c4bce38dd565366b0984" pod="kube-system/kube-apiserver-ha-628553" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-628553\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Oct 07 12:27:25 ha-628553 kubelet[1050]: I1007 12:27:25.293190    1050 scope.go:117] "RemoveContainer" containerID="606bb92353e8608947ca6c5edaaeb447dd03f364d07574451a62d7ddd1de7b44"
	Oct 07 12:27:25 ha-628553 kubelet[1050]: E1007 12:27:25.293768    1050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-628553_kube-system(5f9e39492eb2c4bce38dd565366b0984)\"" pod="kube-system/kube-apiserver-ha-628553" podUID="5f9e39492eb2c4bce38dd565366b0984"
	Oct 07 12:27:25 ha-628553 kubelet[1050]: W1007 12:27:25.511267    1050 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2455": dial tcp 192.168.39.254:8443: connect: no route to host
	Oct 07 12:27:25 ha-628553 kubelet[1050]: E1007 12:27:25.511364    1050 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=2455\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Oct 07 12:27:25 ha-628553 kubelet[1050]: I1007 12:27:25.511448    1050 status_manager.go:851] "Failed to get status for pod" podUID="f7c00c1a-1a68-4b5d-b870-6fb9aa3780f0" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Oct 07 12:27:25 ha-628553 kubelet[1050]: E1007 12:27:25.511269    1050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-628553.17fc2b39e83b13be  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-628553,UID:5f9e39492eb2c4bce38dd565366b0984,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-628553,},FirstTimestamp:2024-10-07 12:24:44.71274387 +0000 UTC m=+175.674651685,LastTimestamp:2024-10-07 12:24:44.71274387 +0000 UTC m=+175.674651685,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:
nil,ReportingController:kubelet,ReportingInstance:ha-628553,}"
	Oct 07 12:27:28 ha-628553 kubelet[1050]: W1007 12:27:28.583357    1050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-628553&resourceVersion=2455": dial tcp 192.168.39.254:8443: connect: no route to host
	Oct 07 12:27:28 ha-628553 kubelet[1050]: E1007 12:27:28.583458    1050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-628553&resourceVersion=2455\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	Oct 07 12:27:28 ha-628553 kubelet[1050]: I1007 12:27:28.584025    1050 status_manager.go:851] "Failed to get status for pod" podUID="9df0eeae4932743e946b9f74b4181463" pod="kube-system/kube-vip-ha-628553" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-vip-ha-628553\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Oct 07 12:27:29 ha-628553 kubelet[1050]: E1007 12:27:29.395016    1050 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304049394434428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:27:29 ha-628553 kubelet[1050]: E1007 12:27:29.395052    1050 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728304049394434428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 07 12:27:29 ha-628553 kubelet[1050]: I1007 12:27:29.774205    1050 scope.go:117] "RemoveContainer" containerID="606bb92353e8608947ca6c5edaaeb447dd03f364d07574451a62d7ddd1de7b44"
	Oct 07 12:27:29 ha-628553 kubelet[1050]: E1007 12:27:29.774474    1050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-628553_kube-system(5f9e39492eb2c4bce38dd565366b0984)\"" pod="kube-system/kube-apiserver-ha-628553" podUID="5f9e39492eb2c4bce38dd565366b0984"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-628553 -n ha-628553
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-628553 -n ha-628553: exit status 2 (249.131834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-628553" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (173.87s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (332.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-263097
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-263097
E1007 12:41:42.462800  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-263097: exit status 82 (2m1.969305529s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-263097-m03"  ...
	* Stopping node "multinode-263097-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-263097" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-263097 --wait=true -v=8 --alsologtostderr
E1007 12:44:44.455211  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:44:45.528784  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:45:01.380533  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-263097 --wait=true -v=8 --alsologtostderr: (3m27.641252434s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-263097
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-263097 -n multinode-263097
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 logs -n 25
E1007 12:46:42.463125  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-263097 logs -n 25: (2.200543334s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m02:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3309803868/001/cp-test_multinode-263097-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m02:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097:/home/docker/cp-test_multinode-263097-m02_multinode-263097.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n multinode-263097 sudo cat                                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /home/docker/cp-test_multinode-263097-m02_multinode-263097.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m02:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03:/home/docker/cp-test_multinode-263097-m02_multinode-263097-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n multinode-263097-m03 sudo cat                                   | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /home/docker/cp-test_multinode-263097-m02_multinode-263097-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp testdata/cp-test.txt                                                | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m03:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3309803868/001/cp-test_multinode-263097-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m03:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097:/home/docker/cp-test_multinode-263097-m03_multinode-263097.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n multinode-263097 sudo cat                                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /home/docker/cp-test_multinode-263097-m03_multinode-263097.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m03:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m02:/home/docker/cp-test_multinode-263097-m03_multinode-263097-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n multinode-263097-m02 sudo cat                                   | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /home/docker/cp-test_multinode-263097-m03_multinode-263097-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-263097 node stop m03                                                          | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	| node    | multinode-263097 node start                                                             | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:41 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-263097                                                                | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:41 UTC |                     |
	| stop    | -p multinode-263097                                                                     | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:41 UTC |                     |
	| start   | -p multinode-263097                                                                     | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:43 UTC | 07 Oct 24 12:46 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-263097                                                                | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:46 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:43:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:43:14.159864  420401 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:43:14.160148  420401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:43:14.160157  420401 out.go:358] Setting ErrFile to fd 2...
	I1007 12:43:14.160161  420401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:43:14.160377  420401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:43:14.160997  420401 out.go:352] Setting JSON to false
	I1007 12:43:14.162036  420401 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8740,"bootTime":1728296254,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:43:14.162167  420401 start.go:139] virtualization: kvm guest
	I1007 12:43:14.164687  420401 out.go:177] * [multinode-263097] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:43:14.166426  420401 notify.go:220] Checking for updates...
	I1007 12:43:14.166453  420401 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:43:14.168029  420401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:43:14.169597  420401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:43:14.171125  420401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:43:14.172634  420401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:43:14.173948  420401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:43:14.175659  420401 config.go:182] Loaded profile config "multinode-263097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:43:14.175763  420401 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:43:14.176243  420401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:43:14.176317  420401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:43:14.192593  420401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35673
	I1007 12:43:14.193112  420401 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:43:14.193755  420401 main.go:141] libmachine: Using API Version  1
	I1007 12:43:14.193783  420401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:43:14.194191  420401 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:43:14.194484  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:43:14.233958  420401 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 12:43:14.235420  420401 start.go:297] selected driver: kvm2
	I1007 12:43:14.235442  420401 start.go:901] validating driver "kvm2" against &{Name:multinode-263097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-263097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:43:14.235597  420401 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:43:14.236037  420401 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:43:14.236136  420401 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:43:14.251846  420401 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:43:14.252713  420401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:43:14.252766  420401 cni.go:84] Creating CNI manager for ""
	I1007 12:43:14.252843  420401 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 12:43:14.252920  420401 start.go:340] cluster config:
	{Name:multinode-263097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-263097 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubefl
ow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:43:14.253083  420401 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:43:14.254978  420401 out.go:177] * Starting "multinode-263097" primary control-plane node in "multinode-263097" cluster
	I1007 12:43:14.256363  420401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:43:14.256400  420401 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:43:14.256407  420401 cache.go:56] Caching tarball of preloaded images
	I1007 12:43:14.256558  420401 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:43:14.256573  420401 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:43:14.256693  420401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/config.json ...
	I1007 12:43:14.256903  420401 start.go:360] acquireMachinesLock for multinode-263097: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:43:14.256954  420401 start.go:364] duration metric: took 32.229µs to acquireMachinesLock for "multinode-263097"
	I1007 12:43:14.256979  420401 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:43:14.256986  420401 fix.go:54] fixHost starting: 
	I1007 12:43:14.257256  420401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:43:14.257289  420401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:43:14.272815  420401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I1007 12:43:14.273236  420401 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:43:14.273733  420401 main.go:141] libmachine: Using API Version  1
	I1007 12:43:14.273758  420401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:43:14.274129  420401 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:43:14.274288  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:43:14.274445  420401 main.go:141] libmachine: (multinode-263097) Calling .GetState
	I1007 12:43:14.275982  420401 fix.go:112] recreateIfNeeded on multinode-263097: state=Running err=<nil>
	W1007 12:43:14.276002  420401 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:43:14.278038  420401 out.go:177] * Updating the running kvm2 "multinode-263097" VM ...
	I1007 12:43:14.279291  420401 machine.go:93] provisionDockerMachine start ...
	I1007 12:43:14.279317  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:43:14.279545  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.282056  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.282431  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.282479  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.282649  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:43:14.282816  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.283079  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.283231  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:43:14.283393  420401 main.go:141] libmachine: Using SSH client type: native
	I1007 12:43:14.283612  420401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 12:43:14.283625  420401 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:43:14.388784  420401 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-263097
	
	I1007 12:43:14.388822  420401 main.go:141] libmachine: (multinode-263097) Calling .GetMachineName
	I1007 12:43:14.389110  420401 buildroot.go:166] provisioning hostname "multinode-263097"
	I1007 12:43:14.389145  420401 main.go:141] libmachine: (multinode-263097) Calling .GetMachineName
	I1007 12:43:14.389392  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.392145  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.392687  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.392718  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.392834  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:43:14.393005  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.393204  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.393391  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:43:14.393543  420401 main.go:141] libmachine: Using SSH client type: native
	I1007 12:43:14.393722  420401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 12:43:14.393733  420401 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-263097 && echo "multinode-263097" | sudo tee /etc/hostname
	I1007 12:43:14.508900  420401 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-263097
	
	I1007 12:43:14.508930  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.511492  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.511826  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.511855  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.512076  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:43:14.512257  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.512413  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.512531  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:43:14.512685  420401 main.go:141] libmachine: Using SSH client type: native
	I1007 12:43:14.512865  420401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 12:43:14.512880  420401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-263097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-263097/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-263097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:43:14.612152  420401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:43:14.612187  420401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:43:14.612268  420401 buildroot.go:174] setting up certificates
	I1007 12:43:14.612278  420401 provision.go:84] configureAuth start
	I1007 12:43:14.612291  420401 main.go:141] libmachine: (multinode-263097) Calling .GetMachineName
	I1007 12:43:14.612578  420401 main.go:141] libmachine: (multinode-263097) Calling .GetIP
	I1007 12:43:14.615530  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.615948  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.615977  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.616100  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.618406  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.618682  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.618725  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.618833  420401 provision.go:143] copyHostCerts
	I1007 12:43:14.618871  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:43:14.618917  420401 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:43:14.619041  420401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:43:14.619151  420401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:43:14.619292  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:43:14.619321  420401 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:43:14.619331  420401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:43:14.619375  420401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:43:14.619442  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:43:14.619459  420401 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:43:14.619465  420401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:43:14.619494  420401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:43:14.619556  420401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.multinode-263097 san=[127.0.0.1 192.168.39.213 localhost minikube multinode-263097]
	I1007 12:43:14.807913  420401 provision.go:177] copyRemoteCerts
	I1007 12:43:14.807983  420401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:43:14.808011  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.810757  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.811135  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.811166  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.811339  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:43:14.811526  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.811652  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:43:14.811762  420401 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/multinode-263097/id_rsa Username:docker}
	I1007 12:43:14.895014  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:43:14.895138  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:43:14.922525  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:43:14.922614  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1007 12:43:14.949002  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:43:14.949103  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:43:14.977423  420401 provision.go:87] duration metric: took 365.128305ms to configureAuth
	I1007 12:43:14.977464  420401 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:43:14.977693  420401 config.go:182] Loaded profile config "multinode-263097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:43:14.977776  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.980869  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.981268  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.981295  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.981566  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:43:14.981740  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.981889  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.982023  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:43:14.982209  420401 main.go:141] libmachine: Using SSH client type: native
	I1007 12:43:14.982390  420401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 12:43:14.982412  420401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:44:45.672644  420401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:44:45.672682  420401 machine.go:96] duration metric: took 1m31.39337225s to provisionDockerMachine
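	The drop-in approach shown above keeps the --insecure-registry flag out of crio.conf itself. A minimal way to confirm it took effect after the restart, assuming (not shown in this log) that the ISO's crio.service sources /etc/sysconfig/crio.minikube and passes $CRIO_MINIKUBE_OPTIONS on its ExecStart line:
		cat /etc/sysconfig/crio.minikube   # should contain --insecure-registry 10.96.0.0/12
		ps -o args= -C crio                # the live command line should carry the same flag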
	I1007 12:44:45.672702  420401 start.go:293] postStartSetup for "multinode-263097" (driver="kvm2")
	I1007 12:44:45.672726  420401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:44:45.672777  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:44:45.673149  420401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:44:45.673192  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:44:45.676614  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.677095  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:45.677125  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.677257  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:44:45.677455  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:44:45.677580  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:44:45.677750  420401 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/multinode-263097/id_rsa Username:docker}
	I1007 12:44:45.759356  420401 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:44:45.764202  420401 command_runner.go:130] > NAME=Buildroot
	I1007 12:44:45.764227  420401 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1007 12:44:45.764240  420401 command_runner.go:130] > ID=buildroot
	I1007 12:44:45.764258  420401 command_runner.go:130] > VERSION_ID=2023.02.9
	I1007 12:44:45.764264  420401 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1007 12:44:45.764292  420401 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:44:45.764308  420401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:44:45.764377  420401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:44:45.764449  420401 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:44:45.764476  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:44:45.764563  420401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:44:45.774741  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:44:45.801707  420401 start.go:296] duration metric: took 128.987277ms for postStartSetup
	I1007 12:44:45.801757  420401 fix.go:56] duration metric: took 1m31.544771096s for fixHost
	I1007 12:44:45.801784  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:44:45.804991  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.805385  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:45.805419  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.805599  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:44:45.805813  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:44:45.805927  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:44:45.806093  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:44:45.806268  420401 main.go:141] libmachine: Using SSH client type: native
	I1007 12:44:45.806492  420401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 12:44:45.806504  420401 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:44:45.908112  420401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728305085.881718886
	
	I1007 12:44:45.908134  420401 fix.go:216] guest clock: 1728305085.881718886
	I1007 12:44:45.908142  420401 fix.go:229] Guest: 2024-10-07 12:44:45.881718886 +0000 UTC Remote: 2024-10-07 12:44:45.801762257 +0000 UTC m=+91.685549591 (delta=79.956629ms)
	I1007 12:44:45.908188  420401 fix.go:200] guest clock delta is within tolerance: 79.956629ms
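	The delta reported at fix.go:229 is plain subtraction: the guest clock read 12:44:45.881718886 while the host-side timestamp was 12:44:45.801762257, a difference of 79.956629ms, which fix.go:200 accepts as within tolerance, so the guest clock is not adjusted.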
	I1007 12:44:45.908197  420401 start.go:83] releasing machines lock for "multinode-263097", held for 1m31.651222907s
	I1007 12:44:45.908225  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:44:45.908459  420401 main.go:141] libmachine: (multinode-263097) Calling .GetIP
	I1007 12:44:45.911342  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.911659  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:45.911685  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.911926  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:44:45.912485  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:44:45.912665  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:44:45.912793  420401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:44:45.912838  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:44:45.912872  420401 ssh_runner.go:195] Run: cat /version.json
	I1007 12:44:45.912895  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:44:45.915733  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.915838  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.916135  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:45.916162  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.916276  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:44:45.916407  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:45.916429  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.916416  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:44:45.916592  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:44:45.916595  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:44:45.916787  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:44:45.916802  420401 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/multinode-263097/id_rsa Username:docker}
	I1007 12:44:45.916892  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:44:45.917038  420401 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/multinode-263097/id_rsa Username:docker}
	I1007 12:44:45.988274  420401 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1007 12:44:45.988452  420401 ssh_runner.go:195] Run: systemctl --version
	I1007 12:44:46.017878  420401 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1007 12:44:46.018640  420401 command_runner.go:130] > systemd 252 (252)
	I1007 12:44:46.018677  420401 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1007 12:44:46.018749  420401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:44:46.183205  420401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 12:44:46.189510  420401 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1007 12:44:46.189558  420401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:44:46.189630  420401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:44:46.199513  420401 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 12:44:46.199552  420401 start.go:495] detecting cgroup driver to use...
	I1007 12:44:46.199664  420401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:44:46.216583  420401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:44:46.231367  420401 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:44:46.231428  420401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:44:46.246225  420401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:44:46.260758  420401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:44:46.411632  420401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:44:46.560393  420401 docker.go:233] disabling docker service ...
	I1007 12:44:46.560497  420401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:44:46.584668  420401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:44:46.601279  420401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:44:46.753673  420401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:44:46.915970  420401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:44:46.931191  420401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:44:46.951574  420401 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1007 12:44:46.951623  420401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:44:46.951678  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:46.963117  420401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:44:46.963210  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:46.974629  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:46.985874  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:46.998127  420401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:44:47.010474  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:47.022586  420401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:47.035715  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:47.048449  420401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:44:47.059510  420401 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1007 12:44:47.059673  420401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:44:47.070900  420401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:44:47.231249  420401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:44:56.198259  420401 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.966956254s)
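	A quick way to check that the sed edits above survived the restart is to grep the drop-in they modified; a sketch of the expected keys (values taken from the commands in this log, not re-captured from the VM):
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
		# pause_image = "registry.k8s.io/pause:3.10"
		# cgroup_manager = "cgroupfs"
		# conmon_cgroup = "pod"
		# default_sysctls = [
		#   "net.ipv4.ip_unprivileged_port_start=0",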
	I1007 12:44:56.198299  420401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:44:56.198360  420401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:44:56.203586  420401 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1007 12:44:56.203618  420401 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1007 12:44:56.203628  420401 command_runner.go:130] > Device: 0,22	Inode: 1313        Links: 1
	I1007 12:44:56.203638  420401 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 12:44:56.203646  420401 command_runner.go:130] > Access: 2024-10-07 12:44:56.053199551 +0000
	I1007 12:44:56.203655  420401 command_runner.go:130] > Modify: 2024-10-07 12:44:56.053199551 +0000
	I1007 12:44:56.203663  420401 command_runner.go:130] > Change: 2024-10-07 12:44:56.053199551 +0000
	I1007 12:44:56.203668  420401 command_runner.go:130] >  Birth: -
	I1007 12:44:56.203710  420401 start.go:563] Will wait 60s for crictl version
	I1007 12:44:56.203774  420401 ssh_runner.go:195] Run: which crictl
	I1007 12:44:56.207546  420401 command_runner.go:130] > /usr/bin/crictl
	I1007 12:44:56.207782  420401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:44:56.252849  420401 command_runner.go:130] > Version:  0.1.0
	I1007 12:44:56.252877  420401 command_runner.go:130] > RuntimeName:  cri-o
	I1007 12:44:56.252881  420401 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1007 12:44:56.252886  420401 command_runner.go:130] > RuntimeApiVersion:  v1
	I1007 12:44:56.252944  420401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:44:56.253064  420401 ssh_runner.go:195] Run: crio --version
	I1007 12:44:56.285242  420401 command_runner.go:130] > crio version 1.29.1
	I1007 12:44:56.285265  420401 command_runner.go:130] > Version:        1.29.1
	I1007 12:44:56.285271  420401 command_runner.go:130] > GitCommit:      unknown
	I1007 12:44:56.285276  420401 command_runner.go:130] > GitCommitDate:  unknown
	I1007 12:44:56.285280  420401 command_runner.go:130] > GitTreeState:   clean
	I1007 12:44:56.285285  420401 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 12:44:56.285290  420401 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 12:44:56.285294  420401 command_runner.go:130] > Compiler:       gc
	I1007 12:44:56.285298  420401 command_runner.go:130] > Platform:       linux/amd64
	I1007 12:44:56.285302  420401 command_runner.go:130] > Linkmode:       dynamic
	I1007 12:44:56.285307  420401 command_runner.go:130] > BuildTags:      
	I1007 12:44:56.285311  420401 command_runner.go:130] >   containers_image_ostree_stub
	I1007 12:44:56.285315  420401 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 12:44:56.285318  420401 command_runner.go:130] >   btrfs_noversion
	I1007 12:44:56.285325  420401 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 12:44:56.285331  420401 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 12:44:56.285336  420401 command_runner.go:130] >   seccomp
	I1007 12:44:56.285343  420401 command_runner.go:130] > LDFlags:          unknown
	I1007 12:44:56.285350  420401 command_runner.go:130] > SeccompEnabled:   true
	I1007 12:44:56.285360  420401 command_runner.go:130] > AppArmorEnabled:  false
	I1007 12:44:56.285438  420401 ssh_runner.go:195] Run: crio --version
	I1007 12:44:56.321297  420401 command_runner.go:130] > crio version 1.29.1
	I1007 12:44:56.321328  420401 command_runner.go:130] > Version:        1.29.1
	I1007 12:44:56.321337  420401 command_runner.go:130] > GitCommit:      unknown
	I1007 12:44:56.321344  420401 command_runner.go:130] > GitCommitDate:  unknown
	I1007 12:44:56.321350  420401 command_runner.go:130] > GitTreeState:   clean
	I1007 12:44:56.321358  420401 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 12:44:56.321365  420401 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 12:44:56.321371  420401 command_runner.go:130] > Compiler:       gc
	I1007 12:44:56.321377  420401 command_runner.go:130] > Platform:       linux/amd64
	I1007 12:44:56.321381  420401 command_runner.go:130] > Linkmode:       dynamic
	I1007 12:44:56.321386  420401 command_runner.go:130] > BuildTags:      
	I1007 12:44:56.321393  420401 command_runner.go:130] >   containers_image_ostree_stub
	I1007 12:44:56.321397  420401 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 12:44:56.321401  420401 command_runner.go:130] >   btrfs_noversion
	I1007 12:44:56.321405  420401 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 12:44:56.321409  420401 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 12:44:56.321418  420401 command_runner.go:130] >   seccomp
	I1007 12:44:56.321425  420401 command_runner.go:130] > LDFlags:          unknown
	I1007 12:44:56.321429  420401 command_runner.go:130] > SeccompEnabled:   true
	I1007 12:44:56.321434  420401 command_runner.go:130] > AppArmorEnabled:  false
	I1007 12:44:56.324227  420401 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:44:56.325702  420401 main.go:141] libmachine: (multinode-263097) Calling .GetIP
	I1007 12:44:56.328455  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:56.328859  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:56.328888  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:56.329081  420401 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:44:56.333832  420401 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1007 12:44:56.333965  420401 kubeadm.go:883] updating cluster {Name:multinode-263097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-263097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:44:56.334103  420401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:44:56.334153  420401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:44:56.382039  420401 command_runner.go:130] > {
	I1007 12:44:56.382066  420401 command_runner.go:130] >   "images": [
	I1007 12:44:56.382073  420401 command_runner.go:130] >     {
	I1007 12:44:56.382084  420401 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 12:44:56.382094  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382103  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 12:44:56.382109  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382115  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382131  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 12:44:56.382142  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 12:44:56.382152  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382159  420401 command_runner.go:130] >       "size": "87190579",
	I1007 12:44:56.382167  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.382173  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382186  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382195  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382201  420401 command_runner.go:130] >     },
	I1007 12:44:56.382206  420401 command_runner.go:130] >     {
	I1007 12:44:56.382233  420401 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 12:44:56.382242  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382251  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 12:44:56.382259  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382266  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382281  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 12:44:56.382294  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 12:44:56.382303  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382313  420401 command_runner.go:130] >       "size": "1363676",
	I1007 12:44:56.382322  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.382334  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382343  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382349  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382358  420401 command_runner.go:130] >     },
	I1007 12:44:56.382363  420401 command_runner.go:130] >     {
	I1007 12:44:56.382375  420401 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 12:44:56.382384  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382393  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 12:44:56.382402  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382414  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382428  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 12:44:56.382443  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 12:44:56.382449  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382453  420401 command_runner.go:130] >       "size": "31470524",
	I1007 12:44:56.382462  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.382467  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382473  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382477  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382480  420401 command_runner.go:130] >     },
	I1007 12:44:56.382484  420401 command_runner.go:130] >     {
	I1007 12:44:56.382490  420401 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 12:44:56.382496  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382501  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 12:44:56.382507  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382510  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382518  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 12:44:56.382530  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 12:44:56.382536  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382540  420401 command_runner.go:130] >       "size": "63273227",
	I1007 12:44:56.382546  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.382553  420401 command_runner.go:130] >       "username": "nonroot",
	I1007 12:44:56.382557  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382563  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382566  420401 command_runner.go:130] >     },
	I1007 12:44:56.382570  420401 command_runner.go:130] >     {
	I1007 12:44:56.382580  420401 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 12:44:56.382584  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382589  420401 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 12:44:56.382594  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382598  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382605  420401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 12:44:56.382614  420401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 12:44:56.382620  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382624  420401 command_runner.go:130] >       "size": "149009664",
	I1007 12:44:56.382628  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.382632  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.382635  420401 command_runner.go:130] >       },
	I1007 12:44:56.382639  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382643  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382647  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382650  420401 command_runner.go:130] >     },
	I1007 12:44:56.382654  420401 command_runner.go:130] >     {
	I1007 12:44:56.382660  420401 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 12:44:56.382666  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382671  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 12:44:56.382676  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382680  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382690  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 12:44:56.382699  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 12:44:56.382703  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382710  420401 command_runner.go:130] >       "size": "95237600",
	I1007 12:44:56.382714  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.382719  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.382723  420401 command_runner.go:130] >       },
	I1007 12:44:56.382729  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382732  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382736  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382741  420401 command_runner.go:130] >     },
	I1007 12:44:56.382744  420401 command_runner.go:130] >     {
	I1007 12:44:56.382750  420401 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 12:44:56.382756  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382761  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 12:44:56.382767  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382771  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382780  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 12:44:56.382789  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 12:44:56.382795  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382798  420401 command_runner.go:130] >       "size": "89437508",
	I1007 12:44:56.382802  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.382806  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.382810  420401 command_runner.go:130] >       },
	I1007 12:44:56.382813  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382817  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382821  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382825  420401 command_runner.go:130] >     },
	I1007 12:44:56.382828  420401 command_runner.go:130] >     {
	I1007 12:44:56.382834  420401 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 12:44:56.382840  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382845  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 12:44:56.382848  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382852  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382866  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 12:44:56.382876  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 12:44:56.382880  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382884  420401 command_runner.go:130] >       "size": "92733849",
	I1007 12:44:56.382889  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.382893  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382896  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382900  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382903  420401 command_runner.go:130] >     },
	I1007 12:44:56.382906  420401 command_runner.go:130] >     {
	I1007 12:44:56.382912  420401 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 12:44:56.382916  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382920  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 12:44:56.382924  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382928  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382935  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 12:44:56.382942  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 12:44:56.382946  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382950  420401 command_runner.go:130] >       "size": "68420934",
	I1007 12:44:56.382953  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.382975  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.382981  420401 command_runner.go:130] >       },
	I1007 12:44:56.382988  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382994  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.383000  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.383004  420401 command_runner.go:130] >     },
	I1007 12:44:56.383008  420401 command_runner.go:130] >     {
	I1007 12:44:56.383013  420401 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 12:44:56.383017  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.383021  420401 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 12:44:56.383024  420401 command_runner.go:130] >       ],
	I1007 12:44:56.383028  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.383034  420401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 12:44:56.383041  420401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 12:44:56.383045  420401 command_runner.go:130] >       ],
	I1007 12:44:56.383049  420401 command_runner.go:130] >       "size": "742080",
	I1007 12:44:56.383053  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.383056  420401 command_runner.go:130] >         "value": "65535"
	I1007 12:44:56.383060  420401 command_runner.go:130] >       },
	I1007 12:44:56.383063  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.383067  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.383071  420401 command_runner.go:130] >       "pinned": true
	I1007 12:44:56.383074  420401 command_runner.go:130] >     }
	I1007 12:44:56.383078  420401 command_runner.go:130] >   ]
	I1007 12:44:56.383081  420401 command_runner.go:130] > }
	I1007 12:44:56.383315  420401 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:44:56.383332  420401 crio.go:433] Images already preloaded, skipping extraction
	I1007 12:44:56.383385  420401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:44:56.417688  420401 command_runner.go:130] > {
	I1007 12:44:56.417712  420401 command_runner.go:130] >   "images": [
	I1007 12:44:56.417718  420401 command_runner.go:130] >     {
	I1007 12:44:56.417729  420401 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 12:44:56.417736  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.417743  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 12:44:56.417748  420401 command_runner.go:130] >       ],
	I1007 12:44:56.417754  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.417765  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 12:44:56.417776  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 12:44:56.417785  420401 command_runner.go:130] >       ],
	I1007 12:44:56.417791  420401 command_runner.go:130] >       "size": "87190579",
	I1007 12:44:56.417798  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.417804  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.417812  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.417816  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.417821  420401 command_runner.go:130] >     },
	I1007 12:44:56.417826  420401 command_runner.go:130] >     {
	I1007 12:44:56.417833  420401 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 12:44:56.417837  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.417843  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 12:44:56.417849  420401 command_runner.go:130] >       ],
	I1007 12:44:56.417856  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.417872  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 12:44:56.417884  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 12:44:56.417889  420401 command_runner.go:130] >       ],
	I1007 12:44:56.417894  420401 command_runner.go:130] >       "size": "1363676",
	I1007 12:44:56.417914  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.417926  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.417934  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.417941  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.417945  420401 command_runner.go:130] >     },
	I1007 12:44:56.417949  420401 command_runner.go:130] >     {
	I1007 12:44:56.417955  420401 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 12:44:56.417959  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.417967  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 12:44:56.417973  420401 command_runner.go:130] >       ],
	I1007 12:44:56.417977  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.417989  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 12:44:56.418004  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 12:44:56.418013  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418023  420401 command_runner.go:130] >       "size": "31470524",
	I1007 12:44:56.418032  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.418040  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418054  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418061  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418064  420401 command_runner.go:130] >     },
	I1007 12:44:56.418068  420401 command_runner.go:130] >     {
	I1007 12:44:56.418074  420401 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 12:44:56.418082  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418090  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 12:44:56.418099  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418106  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418120  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 12:44:56.418139  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 12:44:56.418148  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418156  420401 command_runner.go:130] >       "size": "63273227",
	I1007 12:44:56.418164  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.418172  420401 command_runner.go:130] >       "username": "nonroot",
	I1007 12:44:56.418177  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418185  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418190  420401 command_runner.go:130] >     },
	I1007 12:44:56.418213  420401 command_runner.go:130] >     {
	I1007 12:44:56.418226  420401 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 12:44:56.418233  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418243  420401 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 12:44:56.418249  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418263  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418272  420401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 12:44:56.418284  420401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 12:44:56.418293  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418300  420401 command_runner.go:130] >       "size": "149009664",
	I1007 12:44:56.418310  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.418320  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.418328  420401 command_runner.go:130] >       },
	I1007 12:44:56.418336  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418345  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418354  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418361  420401 command_runner.go:130] >     },
	I1007 12:44:56.418365  420401 command_runner.go:130] >     {
	I1007 12:44:56.418376  420401 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 12:44:56.418385  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418396  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 12:44:56.418404  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418413  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418427  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 12:44:56.418441  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 12:44:56.418449  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418454  420401 command_runner.go:130] >       "size": "95237600",
	I1007 12:44:56.418463  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.418469  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.418478  420401 command_runner.go:130] >       },
	I1007 12:44:56.418485  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418493  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418503  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418509  420401 command_runner.go:130] >     },
	I1007 12:44:56.418518  420401 command_runner.go:130] >     {
	I1007 12:44:56.418528  420401 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 12:44:56.418538  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418547  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 12:44:56.418553  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418563  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418579  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 12:44:56.418594  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 12:44:56.418603  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418610  420401 command_runner.go:130] >       "size": "89437508",
	I1007 12:44:56.418619  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.418625  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.418633  420401 command_runner.go:130] >       },
	I1007 12:44:56.418638  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418643  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418650  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418658  420401 command_runner.go:130] >     },
	I1007 12:44:56.418663  420401 command_runner.go:130] >     {
	I1007 12:44:56.418673  420401 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 12:44:56.418682  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418690  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 12:44:56.418698  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418704  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418725  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 12:44:56.418738  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 12:44:56.418744  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418751  420401 command_runner.go:130] >       "size": "92733849",
	I1007 12:44:56.418758  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.418765  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418774  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418781  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418789  420401 command_runner.go:130] >     },
	I1007 12:44:56.418795  420401 command_runner.go:130] >     {
	I1007 12:44:56.418806  420401 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 12:44:56.418813  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418822  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 12:44:56.418828  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418835  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418849  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 12:44:56.418864  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 12:44:56.418869  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418879  420401 command_runner.go:130] >       "size": "68420934",
	I1007 12:44:56.418889  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.418896  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.418905  420401 command_runner.go:130] >       },
	I1007 12:44:56.418912  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418921  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418931  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418937  420401 command_runner.go:130] >     },
	I1007 12:44:56.418945  420401 command_runner.go:130] >     {
	I1007 12:44:56.418955  420401 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 12:44:56.418982  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418990  420401 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 12:44:56.418996  420401 command_runner.go:130] >       ],
	I1007 12:44:56.419005  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.419019  420401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 12:44:56.419033  420401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 12:44:56.419039  420401 command_runner.go:130] >       ],
	I1007 12:44:56.419045  420401 command_runner.go:130] >       "size": "742080",
	I1007 12:44:56.419053  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.419063  420401 command_runner.go:130] >         "value": "65535"
	I1007 12:44:56.419071  420401 command_runner.go:130] >       },
	I1007 12:44:56.419080  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.419089  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.419097  420401 command_runner.go:130] >       "pinned": true
	I1007 12:44:56.419105  420401 command_runner.go:130] >     }
	I1007 12:44:56.419111  420401 command_runner.go:130] >   ]
	I1007 12:44:56.419119  420401 command_runner.go:130] > }
	I1007 12:44:56.419318  420401 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:44:56.419335  420401 cache_images.go:84] Images are preloaded, skipping loading
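The JSON above is CRI-O's image inventory as reported over the CRI. For a manual spot-check on the node, a command along these lines should return the same structure (a sketch: the exact command minikube issued is not shown in this log, and the socket path assumes a stock CRI-O install):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images -o json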
	I1007 12:44:56.419344  420401 kubeadm.go:934] updating node { 192.168.39.213 8443 v1.31.1 crio true true} ...
	I1007 12:44:56.419468  420401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-263097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-263097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
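To confirm the guest really runs the kubelet with the flags rendered above, the effective unit plus its drop-ins can be inspected on the VM (a sketch, assuming the kubelet is managed by systemd as in standard minikube guests):

    sudo systemctl cat kubelet                  # unit file plus all drop-ins, including the ExecStart above
    sudo systemctl show kubelet -p ExecStart    # just the resolved ExecStart line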
	I1007 12:44:56.419548  420401 ssh_runner.go:195] Run: crio config
	I1007 12:44:56.464279  420401 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1007 12:44:56.464309  420401 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1007 12:44:56.464316  420401 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1007 12:44:56.464319  420401 command_runner.go:130] > #
	I1007 12:44:56.464334  420401 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1007 12:44:56.464341  420401 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1007 12:44:56.464351  420401 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1007 12:44:56.464366  420401 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1007 12:44:56.464371  420401 command_runner.go:130] > # reload'.
	I1007 12:44:56.464380  420401 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1007 12:44:56.464389  420401 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1007 12:44:56.464398  420401 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1007 12:44:56.464407  420401 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1007 12:44:56.464412  420401 command_runner.go:130] > [crio]
	I1007 12:44:56.464423  420401 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1007 12:44:56.464432  420401 command_runner.go:130] > # container images, in this directory.
	I1007 12:44:56.464439  420401 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1007 12:44:56.464448  420401 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1007 12:44:56.464453  420401 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1007 12:44:56.464464  420401 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory rather than in Root.
	I1007 12:44:56.464469  420401 command_runner.go:130] > # imagestore = ""
	I1007 12:44:56.464475  420401 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1007 12:44:56.464481  420401 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1007 12:44:56.464486  420401 command_runner.go:130] > storage_driver = "overlay"
	I1007 12:44:56.464492  420401 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1007 12:44:56.464498  420401 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1007 12:44:56.464502  420401 command_runner.go:130] > storage_option = [
	I1007 12:44:56.464572  420401 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1007 12:44:56.464586  420401 command_runner.go:130] > ]
	I1007 12:44:56.464596  420401 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1007 12:44:56.464602  420401 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1007 12:44:56.464836  420401 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1007 12:44:56.464853  420401 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1007 12:44:56.464863  420401 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1007 12:44:56.464868  420401 command_runner.go:130] > # always happen on a node reboot
	I1007 12:44:56.465129  420401 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1007 12:44:56.465181  420401 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1007 12:44:56.465197  420401 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1007 12:44:56.465204  420401 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1007 12:44:56.465328  420401 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1007 12:44:56.465345  420401 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1007 12:44:56.465358  420401 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1007 12:44:56.465573  420401 command_runner.go:130] > # internal_wipe = true
	I1007 12:44:56.465601  420401 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1007 12:44:56.465612  420401 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1007 12:44:56.465869  420401 command_runner.go:130] > # internal_repair = false
	I1007 12:44:56.465878  420401 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1007 12:44:56.465884  420401 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1007 12:44:56.465889  420401 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1007 12:44:56.466070  420401 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1007 12:44:56.466086  420401 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1007 12:44:56.466093  420401 command_runner.go:130] > [crio.api]
	I1007 12:44:56.466105  420401 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1007 12:44:56.466359  420401 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1007 12:44:56.466373  420401 command_runner.go:130] > # IP address on which the stream server will listen.
	I1007 12:44:56.466543  420401 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1007 12:44:56.466559  420401 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1007 12:44:56.466568  420401 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1007 12:44:56.466765  420401 command_runner.go:130] > # stream_port = "0"
	I1007 12:44:56.466779  420401 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1007 12:44:56.467012  420401 command_runner.go:130] > # stream_enable_tls = false
	I1007 12:44:56.467030  420401 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1007 12:44:56.467202  420401 command_runner.go:130] > # stream_idle_timeout = ""
	I1007 12:44:56.467218  420401 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1007 12:44:56.467229  420401 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1007 12:44:56.467235  420401 command_runner.go:130] > # minutes.
	I1007 12:44:56.467521  420401 command_runner.go:130] > # stream_tls_cert = ""
	I1007 12:44:56.467538  420401 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1007 12:44:56.467547  420401 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1007 12:44:56.467688  420401 command_runner.go:130] > # stream_tls_key = ""
	I1007 12:44:56.467699  420401 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1007 12:44:56.467705  420401 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1007 12:44:56.467720  420401 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1007 12:44:56.467856  420401 command_runner.go:130] > # stream_tls_ca = ""
	I1007 12:44:56.467875  420401 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 12:44:56.468080  420401 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1007 12:44:56.468093  420401 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 12:44:56.468193  420401 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
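Both gRPC message-size limits are set to 16777216 bytes, i.e. 16 MiB, well below the 80 MiB fallback mentioned in the comments; a quick check of both values:

    echo $((16 * 1024 * 1024))   # 16777216  (configured limit)
    echo $((80 * 1024 * 1024))   # 83886080  (CRI-O's documented default)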
	I1007 12:44:56.468208  420401 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1007 12:44:56.468218  420401 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1007 12:44:56.468227  420401 command_runner.go:130] > [crio.runtime]
	I1007 12:44:56.468234  420401 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1007 12:44:56.468241  420401 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1007 12:44:56.468246  420401 command_runner.go:130] > # "nofile=1024:2048"
	I1007 12:44:56.468277  420401 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1007 12:44:56.468345  420401 command_runner.go:130] > # default_ulimits = [
	I1007 12:44:56.468598  420401 command_runner.go:130] > # ]
	I1007 12:44:56.468613  420401 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1007 12:44:56.468787  420401 command_runner.go:130] > # no_pivot = false
	I1007 12:44:56.468809  420401 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1007 12:44:56.468819  420401 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1007 12:44:56.469035  420401 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1007 12:44:56.469049  420401 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1007 12:44:56.469054  420401 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1007 12:44:56.469073  420401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 12:44:56.469477  420401 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1007 12:44:56.469496  420401 command_runner.go:130] > # Cgroup setting for conmon
	I1007 12:44:56.469507  420401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1007 12:44:56.469632  420401 command_runner.go:130] > conmon_cgroup = "pod"
	I1007 12:44:56.469649  420401 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1007 12:44:56.469657  420401 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1007 12:44:56.469667  420401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 12:44:56.469676  420401 command_runner.go:130] > conmon_env = [
	I1007 12:44:56.469799  420401 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 12:44:56.469812  420401 command_runner.go:130] > ]
	I1007 12:44:56.469821  420401 command_runner.go:130] > # Additional environment variables to set for all the
	I1007 12:44:56.469829  420401 command_runner.go:130] > # containers. These are overridden if set in the
	I1007 12:44:56.469839  420401 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1007 12:44:56.469936  420401 command_runner.go:130] > # default_env = [
	I1007 12:44:56.470072  420401 command_runner.go:130] > # ]
	I1007 12:44:56.470086  420401 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1007 12:44:56.470098  420401 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1007 12:44:56.470399  420401 command_runner.go:130] > # selinux = false
	I1007 12:44:56.470412  420401 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1007 12:44:56.470422  420401 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1007 12:44:56.470432  420401 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1007 12:44:56.470691  420401 command_runner.go:130] > # seccomp_profile = ""
	I1007 12:44:56.470710  420401 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1007 12:44:56.470721  420401 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1007 12:44:56.470731  420401 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1007 12:44:56.470743  420401 command_runner.go:130] > # which might increase security.
	I1007 12:44:56.470751  420401 command_runner.go:130] > # This option is currently deprecated,
	I1007 12:44:56.470763  420401 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1007 12:44:56.470839  420401 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1007 12:44:56.470853  420401 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1007 12:44:56.470863  420401 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1007 12:44:56.470875  420401 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1007 12:44:56.470888  420401 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1007 12:44:56.470904  420401 command_runner.go:130] > # This option supports live configuration reload.
	I1007 12:44:56.471133  420401 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1007 12:44:56.471145  420401 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1007 12:44:56.471152  420401 command_runner.go:130] > # the cgroup blockio controller.
	I1007 12:44:56.471336  420401 command_runner.go:130] > # blockio_config_file = ""
	I1007 12:44:56.471350  420401 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1007 12:44:56.471357  420401 command_runner.go:130] > # blockio parameters.
	I1007 12:44:56.471518  420401 command_runner.go:130] > # blockio_reload = false
	I1007 12:44:56.471531  420401 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1007 12:44:56.471538  420401 command_runner.go:130] > # irqbalance daemon.
	I1007 12:44:56.472773  420401 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1007 12:44:56.472788  420401 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1007 12:44:56.472796  420401 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1007 12:44:56.472802  420401 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1007 12:44:56.472810  420401 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1007 12:44:56.472823  420401 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1007 12:44:56.472838  420401 command_runner.go:130] > # This option supports live configuration reload.
	I1007 12:44:56.472849  420401 command_runner.go:130] > # rdt_config_file = ""
	I1007 12:44:56.472857  420401 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1007 12:44:56.472870  420401 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1007 12:44:56.472889  420401 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1007 12:44:56.472898  420401 command_runner.go:130] > # separate_pull_cgroup = ""
	I1007 12:44:56.472911  420401 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1007 12:44:56.472924  420401 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1007 12:44:56.472934  420401 command_runner.go:130] > # will be added.
	I1007 12:44:56.472941  420401 command_runner.go:130] > # default_capabilities = [
	I1007 12:44:56.472950  420401 command_runner.go:130] > # 	"CHOWN",
	I1007 12:44:56.472959  420401 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1007 12:44:56.472967  420401 command_runner.go:130] > # 	"FSETID",
	I1007 12:44:56.472971  420401 command_runner.go:130] > # 	"FOWNER",
	I1007 12:44:56.472978  420401 command_runner.go:130] > # 	"SETGID",
	I1007 12:44:56.472984  420401 command_runner.go:130] > # 	"SETUID",
	I1007 12:44:56.472993  420401 command_runner.go:130] > # 	"SETPCAP",
	I1007 12:44:56.473002  420401 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1007 12:44:56.473009  420401 command_runner.go:130] > # 	"KILL",
	I1007 12:44:56.473017  420401 command_runner.go:130] > # ]
	I1007 12:44:56.473030  420401 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1007 12:44:56.473043  420401 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1007 12:44:56.473053  420401 command_runner.go:130] > # add_inheritable_capabilities = false
	I1007 12:44:56.473061  420401 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1007 12:44:56.473072  420401 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 12:44:56.473082  420401 command_runner.go:130] > default_sysctls = [
	I1007 12:44:56.473094  420401 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1007 12:44:56.473102  420401 command_runner.go:130] > ]
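The single default_sysctls entry means every container starts with net.ipv4.ip_unprivileged_port_start=0, so unprivileged processes inside containers can bind ports below 1024 without CAP_NET_BIND_SERVICE. A quick way to confirm it from any running pod (the pod name here is hypothetical):

    kubectl exec mypod -- cat /proc/sys/net/ipv4/ip_unprivileged_port_start   # expect: 0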
	I1007 12:44:56.473110  420401 command_runner.go:130] > # List of devices on the host that a
	I1007 12:44:56.473124  420401 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1007 12:44:56.473133  420401 command_runner.go:130] > # allowed_devices = [
	I1007 12:44:56.473141  420401 command_runner.go:130] > # 	"/dev/fuse",
	I1007 12:44:56.473144  420401 command_runner.go:130] > # ]
	I1007 12:44:56.473153  420401 command_runner.go:130] > # List of additional devices, specified as
	I1007 12:44:56.473169  420401 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1007 12:44:56.473180  420401 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1007 12:44:56.473192  420401 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 12:44:56.473201  420401 command_runner.go:130] > # additional_devices = [
	I1007 12:44:56.473207  420401 command_runner.go:130] > # ]
	I1007 12:44:56.473218  420401 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1007 12:44:56.473225  420401 command_runner.go:130] > # cdi_spec_dirs = [
	I1007 12:44:56.473229  420401 command_runner.go:130] > # 	"/etc/cdi",
	I1007 12:44:56.473237  420401 command_runner.go:130] > # 	"/var/run/cdi",
	I1007 12:44:56.473246  420401 command_runner.go:130] > # ]
	I1007 12:44:56.473258  420401 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1007 12:44:56.473271  420401 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1007 12:44:56.473280  420401 command_runner.go:130] > # Defaults to false.
	I1007 12:44:56.473291  420401 command_runner.go:130] > # device_ownership_from_security_context = false
	I1007 12:44:56.473304  420401 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1007 12:44:56.473312  420401 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1007 12:44:56.473320  420401 command_runner.go:130] > # hooks_dir = [
	I1007 12:44:56.473331  420401 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1007 12:44:56.473340  420401 command_runner.go:130] > # ]
	I1007 12:44:56.473352  420401 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1007 12:44:56.473365  420401 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1007 12:44:56.473375  420401 command_runner.go:130] > # its default mounts from the following two files:
	I1007 12:44:56.473384  420401 command_runner.go:130] > #
	I1007 12:44:56.473392  420401 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1007 12:44:56.473401  420401 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1007 12:44:56.473409  420401 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1007 12:44:56.473418  420401 command_runner.go:130] > #
	I1007 12:44:56.473428  420401 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1007 12:44:56.473440  420401 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1007 12:44:56.473453  420401 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1007 12:44:56.473460  420401 command_runner.go:130] > #      only add mounts it finds in this file.
	I1007 12:44:56.473468  420401 command_runner.go:130] > #
	I1007 12:44:56.473476  420401 command_runner.go:130] > # default_mounts_file = ""
	I1007 12:44:56.473484  420401 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1007 12:44:56.473514  420401 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1007 12:44:56.473524  420401 command_runner.go:130] > pids_limit = 1024
	I1007 12:44:56.473534  420401 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1007 12:44:56.473544  420401 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1007 12:44:56.473557  420401 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1007 12:44:56.473572  420401 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1007 12:44:56.473581  420401 command_runner.go:130] > # log_size_max = -1
	I1007 12:44:56.473596  420401 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1007 12:44:56.473605  420401 command_runner.go:130] > # log_to_journald = false
	I1007 12:44:56.473618  420401 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1007 12:44:56.473629  420401 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1007 12:44:56.473638  420401 command_runner.go:130] > # Path to directory for container attach sockets.
	I1007 12:44:56.473646  420401 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1007 12:44:56.473654  420401 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1007 12:44:56.473664  420401 command_runner.go:130] > # bind_mount_prefix = ""
	I1007 12:44:56.473676  420401 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1007 12:44:56.473686  420401 command_runner.go:130] > # read_only = false
	I1007 12:44:56.473698  420401 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1007 12:44:56.473709  420401 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1007 12:44:56.473718  420401 command_runner.go:130] > # live configuration reload.
	I1007 12:44:56.473727  420401 command_runner.go:130] > # log_level = "info"
	I1007 12:44:56.473735  420401 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1007 12:44:56.473743  420401 command_runner.go:130] > # This option supports live configuration reload.
	I1007 12:44:56.473754  420401 command_runner.go:130] > # log_filter = ""
	I1007 12:44:56.473766  420401 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1007 12:44:56.473778  420401 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1007 12:44:56.473787  420401 command_runner.go:130] > # separated by comma.
	I1007 12:44:56.473801  420401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 12:44:56.473810  420401 command_runner.go:130] > # uid_mappings = ""
	I1007 12:44:56.473816  420401 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1007 12:44:56.473826  420401 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1007 12:44:56.473837  420401 command_runner.go:130] > # separated by comma.
	I1007 12:44:56.473852  420401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 12:44:56.473861  420401 command_runner.go:130] > # gid_mappings = ""
	I1007 12:44:56.473875  420401 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1007 12:44:56.473888  420401 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 12:44:56.473900  420401 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 12:44:56.473913  420401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 12:44:56.473923  420401 command_runner.go:130] > # minimum_mappable_uid = -1
	I1007 12:44:56.473936  420401 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1007 12:44:56.473950  420401 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 12:44:56.473962  420401 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 12:44:56.473977  420401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 12:44:56.473984  420401 command_runner.go:130] > # minimum_mappable_gid = -1
	I1007 12:44:56.473990  420401 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1007 12:44:56.474002  420401 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1007 12:44:56.474015  420401 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1007 12:44:56.474024  420401 command_runner.go:130] > # ctr_stop_timeout = 30
	I1007 12:44:56.474036  420401 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1007 12:44:56.474048  420401 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1007 12:44:56.474059  420401 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1007 12:44:56.474067  420401 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1007 12:44:56.474071  420401 command_runner.go:130] > drop_infra_ctr = false
	I1007 12:44:56.474083  420401 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1007 12:44:56.474095  420401 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1007 12:44:56.474109  420401 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1007 12:44:56.474118  420401 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1007 12:44:56.474132  420401 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1007 12:44:56.474144  420401 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1007 12:44:56.474153  420401 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1007 12:44:56.474163  420401 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1007 12:44:56.474173  420401 command_runner.go:130] > # shared_cpuset = ""
	I1007 12:44:56.474185  420401 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1007 12:44:56.474196  420401 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1007 12:44:56.474207  420401 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1007 12:44:56.474217  420401 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1007 12:44:56.474226  420401 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1007 12:44:56.474235  420401 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1007 12:44:56.474242  420401 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1007 12:44:56.474252  420401 command_runner.go:130] > # enable_criu_support = false
	I1007 12:44:56.474263  420401 command_runner.go:130] > # Enable/disable the generation of the container,
	I1007 12:44:56.474276  420401 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1007 12:44:56.474285  420401 command_runner.go:130] > # enable_pod_events = false
	I1007 12:44:56.474298  420401 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1007 12:44:56.474319  420401 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1007 12:44:56.474326  420401 command_runner.go:130] > # default_runtime = "runc"
	I1007 12:44:56.474335  420401 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1007 12:44:56.474351  420401 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1007 12:44:56.474369  420401 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1007 12:44:56.474380  420401 command_runner.go:130] > # creation as a file is not desired either.
	I1007 12:44:56.474395  420401 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1007 12:44:56.474404  420401 command_runner.go:130] > # the hostname is being managed dynamically.
	I1007 12:44:56.474409  420401 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1007 12:44:56.474416  420401 command_runner.go:130] > # ]
	I1007 12:44:56.474426  420401 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1007 12:44:56.474439  420401 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1007 12:44:56.474451  420401 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1007 12:44:56.474463  420401 command_runner.go:130] > # Each entry in the table should follow the format:
	I1007 12:44:56.474470  420401 command_runner.go:130] > #
	I1007 12:44:56.474477  420401 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1007 12:44:56.474487  420401 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1007 12:44:56.474521  420401 command_runner.go:130] > # runtime_type = "oci"
	I1007 12:44:56.474533  420401 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1007 12:44:56.474540  420401 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1007 12:44:56.474548  420401 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1007 12:44:56.474555  420401 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1007 12:44:56.474566  420401 command_runner.go:130] > # monitor_env = []
	I1007 12:44:56.474573  420401 command_runner.go:130] > # privileged_without_host_devices = false
	I1007 12:44:56.474580  420401 command_runner.go:130] > # allowed_annotations = []
	I1007 12:44:56.474586  420401 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1007 12:44:56.474593  420401 command_runner.go:130] > # Where:
	I1007 12:44:56.474601  420401 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1007 12:44:56.474612  420401 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1007 12:44:56.474623  420401 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1007 12:44:56.474634  420401 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1007 12:44:56.474643  420401 command_runner.go:130] > #   in $PATH.
	I1007 12:44:56.474652  420401 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1007 12:44:56.474661  420401 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1007 12:44:56.474671  420401 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1007 12:44:56.474679  420401 command_runner.go:130] > #   state.
	I1007 12:44:56.474687  420401 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1007 12:44:56.474698  420401 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1007 12:44:56.474708  420401 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1007 12:44:56.474716  420401 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1007 12:44:56.474729  420401 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1007 12:44:56.474742  420401 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1007 12:44:56.474752  420401 command_runner.go:130] > #   The currently recognized values are:
	I1007 12:44:56.474772  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1007 12:44:56.474792  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1007 12:44:56.474801  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1007 12:44:56.474811  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1007 12:44:56.474822  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1007 12:44:56.474835  420401 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1007 12:44:56.474849  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1007 12:44:56.474858  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1007 12:44:56.474868  420401 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1007 12:44:56.474881  420401 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1007 12:44:56.474890  420401 command_runner.go:130] > #   deprecated option "conmon".
	I1007 12:44:56.474903  420401 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1007 12:44:56.474915  420401 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1007 12:44:56.474928  420401 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1007 12:44:56.474934  420401 command_runner.go:130] > #   should be moved to the container's cgroup
	I1007 12:44:56.474948  420401 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1007 12:44:56.474974  420401 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1007 12:44:56.474990  420401 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1007 12:44:56.475002  420401 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1007 12:44:56.475007  420401 command_runner.go:130] > #
	I1007 12:44:56.475018  420401 command_runner.go:130] > # Using the seccomp notifier feature:
	I1007 12:44:56.475026  420401 command_runner.go:130] > #
	I1007 12:44:56.475036  420401 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1007 12:44:56.475046  420401 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1007 12:44:56.475055  420401 command_runner.go:130] > #
	I1007 12:44:56.475065  420401 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1007 12:44:56.475078  420401 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1007 12:44:56.475085  420401 command_runner.go:130] > #
	I1007 12:44:56.475095  420401 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1007 12:44:56.475104  420401 command_runner.go:130] > # feature.
	I1007 12:44:56.475111  420401 command_runner.go:130] > #
	I1007 12:44:56.475119  420401 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1007 12:44:56.475127  420401 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1007 12:44:56.475136  420401 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1007 12:44:56.475150  420401 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1007 12:44:56.475162  420401 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1007 12:44:56.475168  420401 command_runner.go:130] > #
	I1007 12:44:56.475180  420401 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1007 12:44:56.475188  420401 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1007 12:44:56.475197  420401 command_runner.go:130] > #
	I1007 12:44:56.475206  420401 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1007 12:44:56.475218  420401 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1007 12:44:56.475227  420401 command_runner.go:130] > #
	I1007 12:44:56.475236  420401 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1007 12:44:56.475245  420401 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1007 12:44:56.475256  420401 command_runner.go:130] > # limitation.
	I1007 12:44:56.475264  420401 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1007 12:44:56.475274  420401 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1007 12:44:56.475281  420401 command_runner.go:130] > runtime_type = "oci"
	I1007 12:44:56.475291  420401 command_runner.go:130] > runtime_root = "/run/runc"
	I1007 12:44:56.475297  420401 command_runner.go:130] > runtime_config_path = ""
	I1007 12:44:56.475305  420401 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1007 12:44:56.475309  420401 command_runner.go:130] > monitor_cgroup = "pod"
	I1007 12:44:56.475316  420401 command_runner.go:130] > monitor_exec_cgroup = ""
	I1007 12:44:56.475319  420401 command_runner.go:130] > monitor_env = [
	I1007 12:44:56.475325  420401 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 12:44:56.475330  420401 command_runner.go:130] > ]
	I1007 12:44:56.475335  420401 command_runner.go:130] > privileged_without_host_devices = false
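The runc handler above is the only runtime defined in this config. CRI-O also reads drop-in files from /etc/crio/crio.conf.d/, so an additional handler could be added without touching the main file; the sketch below assumes crun is installed at /usr/bin/crun and uses a made-up drop-in name. Runtime-handler changes are not marked as live-reloadable in the header above, so CRI-O would need a restart afterwards:

    sudo tee /etc/crio/crio.conf.d/10-crun.conf >/dev/null <<'EOF'
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"
    runtime_type = "oci"
    runtime_root = "/run/crun"
    EOF
    sudo systemctl restart crio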
	I1007 12:44:56.475343  420401 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1007 12:44:56.475349  420401 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1007 12:44:56.475357  420401 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1007 12:44:56.475364  420401 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1007 12:44:56.475374  420401 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1007 12:44:56.475382  420401 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1007 12:44:56.475391  420401 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1007 12:44:56.475400  420401 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1007 12:44:56.475411  420401 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1007 12:44:56.475417  420401 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1007 12:44:56.475423  420401 command_runner.go:130] > # Example:
	I1007 12:44:56.475427  420401 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1007 12:44:56.475434  420401 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1007 12:44:56.475439  420401 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1007 12:44:56.475446  420401 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1007 12:44:56.475450  420401 command_runner.go:130] > # cpuset = 0
	I1007 12:44:56.475456  420401 command_runner.go:130] > # cpushares = "0-1"
	I1007 12:44:56.475460  420401 command_runner.go:130] > # Where:
	I1007 12:44:56.475467  420401 command_runner.go:130] > # The workload name is workload-type.
	I1007 12:44:56.475473  420401 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1007 12:44:56.475482  420401 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1007 12:44:56.475491  420401 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1007 12:44:56.475499  420401 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1007 12:44:56.475510  420401 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1007 12:44:56.475514  420401 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1007 12:44:56.475523  420401 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1007 12:44:56.475530  420401 command_runner.go:130] > # Default value is set to true
	I1007 12:44:56.475534  420401 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1007 12:44:56.475542  420401 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1007 12:44:56.475547  420401 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1007 12:44:56.475553  420401 command_runner.go:130] > # Default value is set to 'false'
	I1007 12:44:56.475558  420401 command_runner.go:130] > # disable_hostport_mapping = false
	I1007 12:44:56.475566  420401 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1007 12:44:56.475571  420401 command_runner.go:130] > #
	I1007 12:44:56.475577  420401 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1007 12:44:56.475585  420401 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1007 12:44:56.475591  420401 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1007 12:44:56.475597  420401 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1007 12:44:56.475602  420401 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1007 12:44:56.475606  420401 command_runner.go:130] > [crio.image]
	I1007 12:44:56.475611  420401 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1007 12:44:56.475615  420401 command_runner.go:130] > # default_transport = "docker://"
	I1007 12:44:56.475620  420401 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1007 12:44:56.475627  420401 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1007 12:44:56.475631  420401 command_runner.go:130] > # global_auth_file = ""
	I1007 12:44:56.475635  420401 command_runner.go:130] > # The image used to instantiate infra containers.
	I1007 12:44:56.475640  420401 command_runner.go:130] > # This option supports live configuration reload.
	I1007 12:44:56.475644  420401 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1007 12:44:56.475650  420401 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1007 12:44:56.475655  420401 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1007 12:44:56.475660  420401 command_runner.go:130] > # This option supports live configuration reload.
	I1007 12:44:56.475663  420401 command_runner.go:130] > # pause_image_auth_file = ""
	I1007 12:44:56.475669  420401 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1007 12:44:56.475676  420401 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1007 12:44:56.475681  420401 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1007 12:44:56.475686  420401 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1007 12:44:56.475689  420401 command_runner.go:130] > # pause_command = "/pause"
	I1007 12:44:56.475694  420401 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1007 12:44:56.475700  420401 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1007 12:44:56.475705  420401 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1007 12:44:56.475710  420401 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1007 12:44:56.475716  420401 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1007 12:44:56.475722  420401 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1007 12:44:56.475726  420401 command_runner.go:130] > # pinned_images = [
	I1007 12:44:56.475729  420401 command_runner.go:130] > # ]
	I1007 12:44:56.475734  420401 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1007 12:44:56.475740  420401 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1007 12:44:56.475745  420401 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1007 12:44:56.475750  420401 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1007 12:44:56.475755  420401 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1007 12:44:56.475759  420401 command_runner.go:130] > # signature_policy = ""
	I1007 12:44:56.475764  420401 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1007 12:44:56.475773  420401 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1007 12:44:56.475778  420401 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1007 12:44:56.475787  420401 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1007 12:44:56.475792  420401 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1007 12:44:56.475800  420401 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1007 12:44:56.475809  420401 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1007 12:44:56.475814  420401 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1007 12:44:56.475820  420401 command_runner.go:130] > # changing them here.
	I1007 12:44:56.475825  420401 command_runner.go:130] > # insecure_registries = [
	I1007 12:44:56.475830  420401 command_runner.go:130] > # ]
	I1007 12:44:56.475836  420401 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1007 12:44:56.475841  420401 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1007 12:44:56.475845  420401 command_runner.go:130] > # image_volumes = "mkdir"
	I1007 12:44:56.475851  420401 command_runner.go:130] > # Temporary directory to use for storing big files
	I1007 12:44:56.475858  420401 command_runner.go:130] > # big_files_temporary_dir = ""
	I1007 12:44:56.475864  420401 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1007 12:44:56.475870  420401 command_runner.go:130] > # CNI plugins.
	I1007 12:44:56.475874  420401 command_runner.go:130] > [crio.network]
	I1007 12:44:56.475882  420401 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1007 12:44:56.475887  420401 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1007 12:44:56.475893  420401 command_runner.go:130] > # cni_default_network = ""
	I1007 12:44:56.475899  420401 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1007 12:44:56.475905  420401 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1007 12:44:56.475910  420401 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1007 12:44:56.475917  420401 command_runner.go:130] > # plugin_dirs = [
	I1007 12:44:56.475920  420401 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1007 12:44:56.475925  420401 command_runner.go:130] > # ]
	I1007 12:44:56.475930  420401 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1007 12:44:56.475936  420401 command_runner.go:130] > [crio.metrics]
	I1007 12:44:56.475941  420401 command_runner.go:130] > # Globally enable or disable metrics support.
	I1007 12:44:56.475947  420401 command_runner.go:130] > enable_metrics = true
	I1007 12:44:56.475952  420401 command_runner.go:130] > # Specify enabled metrics collectors.
	I1007 12:44:56.475958  420401 command_runner.go:130] > # Per default all metrics are enabled.
	I1007 12:44:56.475964  420401 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1007 12:44:56.475972  420401 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1007 12:44:56.475977  420401 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1007 12:44:56.475984  420401 command_runner.go:130] > # metrics_collectors = [
	I1007 12:44:56.475988  420401 command_runner.go:130] > # 	"operations",
	I1007 12:44:56.475995  420401 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1007 12:44:56.475999  420401 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1007 12:44:56.476005  420401 command_runner.go:130] > # 	"operations_errors",
	I1007 12:44:56.476009  420401 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1007 12:44:56.476015  420401 command_runner.go:130] > # 	"image_pulls_by_name",
	I1007 12:44:56.476020  420401 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1007 12:44:56.476026  420401 command_runner.go:130] > # 	"image_pulls_failures",
	I1007 12:44:56.476030  420401 command_runner.go:130] > # 	"image_pulls_successes",
	I1007 12:44:56.476037  420401 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1007 12:44:56.476042  420401 command_runner.go:130] > # 	"image_layer_reuse",
	I1007 12:44:56.476049  420401 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1007 12:44:56.476053  420401 command_runner.go:130] > # 	"containers_oom_total",
	I1007 12:44:56.476059  420401 command_runner.go:130] > # 	"containers_oom",
	I1007 12:44:56.476062  420401 command_runner.go:130] > # 	"processes_defunct",
	I1007 12:44:56.476068  420401 command_runner.go:130] > # 	"operations_total",
	I1007 12:44:56.476073  420401 command_runner.go:130] > # 	"operations_latency_seconds",
	I1007 12:44:56.476079  420401 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1007 12:44:56.476083  420401 command_runner.go:130] > # 	"operations_errors_total",
	I1007 12:44:56.476088  420401 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1007 12:44:56.476094  420401 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1007 12:44:56.476098  420401 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1007 12:44:56.476104  420401 command_runner.go:130] > # 	"image_pulls_success_total",
	I1007 12:44:56.476109  420401 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1007 12:44:56.476114  420401 command_runner.go:130] > # 	"containers_oom_count_total",
	I1007 12:44:56.476119  420401 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1007 12:44:56.476126  420401 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1007 12:44:56.476129  420401 command_runner.go:130] > # ]
	I1007 12:44:56.476137  420401 command_runner.go:130] > # The port on which the metrics server will listen.
	I1007 12:44:56.476141  420401 command_runner.go:130] > # metrics_port = 9090
	I1007 12:44:56.476147  420401 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1007 12:44:56.476152  420401 command_runner.go:130] > # metrics_socket = ""
	I1007 12:44:56.476157  420401 command_runner.go:130] > # The certificate for the secure metrics server.
	I1007 12:44:56.476163  420401 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1007 12:44:56.476171  420401 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1007 12:44:56.476178  420401 command_runner.go:130] > # certificate on any modification event.
	I1007 12:44:56.476188  420401 command_runner.go:130] > # metrics_cert = ""
	I1007 12:44:56.476196  420401 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1007 12:44:56.476207  420401 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1007 12:44:56.476212  420401 command_runner.go:130] > # metrics_key = ""
	I1007 12:44:56.476221  420401 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1007 12:44:56.476230  420401 command_runner.go:130] > [crio.tracing]
	I1007 12:44:56.476239  420401 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1007 12:44:56.476249  420401 command_runner.go:130] > # enable_tracing = false
	I1007 12:44:56.476258  420401 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1007 12:44:56.476268  420401 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1007 12:44:56.476275  420401 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1007 12:44:56.476279  420401 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1007 12:44:56.476284  420401 command_runner.go:130] > # CRI-O NRI configuration.
	I1007 12:44:56.476287  420401 command_runner.go:130] > [crio.nri]
	I1007 12:44:56.476292  420401 command_runner.go:130] > # Globally enable or disable NRI.
	I1007 12:44:56.476298  420401 command_runner.go:130] > # enable_nri = false
	I1007 12:44:56.476303  420401 command_runner.go:130] > # NRI socket to listen on.
	I1007 12:44:56.476308  420401 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1007 12:44:56.476312  420401 command_runner.go:130] > # NRI plugin directory to use.
	I1007 12:44:56.476319  420401 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1007 12:44:56.476324  420401 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1007 12:44:56.476330  420401 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1007 12:44:56.476336  420401 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1007 12:44:56.476342  420401 command_runner.go:130] > # nri_disable_connections = false
	I1007 12:44:56.476347  420401 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1007 12:44:56.476353  420401 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1007 12:44:56.476359  420401 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1007 12:44:56.476365  420401 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1007 12:44:56.476373  420401 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1007 12:44:56.476379  420401 command_runner.go:130] > [crio.stats]
	I1007 12:44:56.476384  420401 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1007 12:44:56.476392  420401 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1007 12:44:56.476396  420401 command_runner.go:130] > # stats_collection_period = 0
	I1007 12:44:56.476435  420401 command_runner.go:130] ! time="2024-10-07 12:44:56.428132180Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1007 12:44:56.476448  420401 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1007 12:44:56.476567  420401 cni.go:84] Creating CNI manager for ""
	I1007 12:44:56.476583  420401 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 12:44:56.476591  420401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:44:56.476611  420401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-263097 NodeName:multinode-263097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:44:56.476739  420401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-263097"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:44:56.476798  420401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:44:56.488161  420401 command_runner.go:130] > kubeadm
	I1007 12:44:56.488189  420401 command_runner.go:130] > kubectl
	I1007 12:44:56.488195  420401 command_runner.go:130] > kubelet
	I1007 12:44:56.488225  420401 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:44:56.488292  420401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:44:56.498541  420401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1007 12:44:56.516255  420401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:44:56.534011  420401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1007 12:44:56.553366  420401 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I1007 12:44:56.557955  420401 command_runner.go:130] > 192.168.39.213	control-plane.minikube.internal
	I1007 12:44:56.558045  420401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:44:56.707064  420401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:44:56.722857  420401 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097 for IP: 192.168.39.213
	I1007 12:44:56.722887  420401 certs.go:194] generating shared ca certs ...
	I1007 12:44:56.722926  420401 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:44:56.723152  420401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:44:56.723233  420401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:44:56.723261  420401 certs.go:256] generating profile certs ...
	I1007 12:44:56.723371  420401 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/client.key
	I1007 12:44:56.723447  420401 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/apiserver.key.d51ecaf1
	I1007 12:44:56.723495  420401 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/proxy-client.key
	I1007 12:44:56.723525  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:44:56.723546  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:44:56.723569  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:44:56.723589  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:44:56.723611  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:44:56.723632  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:44:56.723649  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:44:56.723669  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:44:56.723736  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:44:56.723779  420401 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:44:56.723793  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:44:56.723831  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:44:56.723874  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:44:56.723905  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:44:56.723961  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:44:56.724000  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:44:56.724016  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:44:56.724035  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:56.724970  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:44:56.751944  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:44:56.778527  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:44:56.804500  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:44:56.830866  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:44:56.857602  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:44:56.884583  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:44:56.911301  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:44:56.938183  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:44:56.965581  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:44:56.991544  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:44:57.017718  420401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:44:57.036054  420401 ssh_runner.go:195] Run: openssl version
	I1007 12:44:57.042013  420401 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1007 12:44:57.042224  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:44:57.053994  420401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:57.058794  420401 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:57.058827  420401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:57.058876  420401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:57.064855  420401 command_runner.go:130] > b5213941
	I1007 12:44:57.064933  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:44:57.075873  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:44:57.089759  420401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:44:57.094849  420401 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:44:57.095016  420401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:44:57.095079  420401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:44:57.101287  420401 command_runner.go:130] > 51391683
	I1007 12:44:57.101369  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:44:57.112672  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:44:57.125741  420401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:44:57.131384  420401 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:44:57.131551  420401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:44:57.131620  420401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:44:57.137589  420401 command_runner.go:130] > 3ec20f2e
	I1007 12:44:57.137864  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:44:57.149703  420401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:44:57.154843  420401 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:44:57.154872  420401 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1007 12:44:57.154877  420401 command_runner.go:130] > Device: 253,1	Inode: 8384040     Links: 1
	I1007 12:44:57.154884  420401 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 12:44:57.154893  420401 command_runner.go:130] > Access: 2024-10-07 12:38:06.689756968 +0000
	I1007 12:44:57.154898  420401 command_runner.go:130] > Modify: 2024-10-07 12:38:06.689756968 +0000
	I1007 12:44:57.154906  420401 command_runner.go:130] > Change: 2024-10-07 12:38:06.689756968 +0000
	I1007 12:44:57.154914  420401 command_runner.go:130] >  Birth: 2024-10-07 12:38:06.689756968 +0000
	I1007 12:44:57.155023  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:44:57.162053  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.162134  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:44:57.169066  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.169157  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:44:57.176034  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.176116  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:44:57.182355  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.182626  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:44:57.189741  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.189949  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 12:44:57.196149  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.196241  420401 kubeadm.go:392] StartCluster: {Name:multinode-263097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:multinode-263097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget
:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:44:57.196360  420401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:44:57.196428  420401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:44:57.241095  420401 command_runner.go:130] > 7ba9799f96099d623cb6f05b7f50ab6b884e9a5e917bdee263a6c4eb89260a2b
	I1007 12:44:57.241174  420401 command_runner.go:130] > a4a3b707c2ce9aa809e6b495f0dd6d9d6eb5f9ebb8f247654bc4ded294548ea6
	I1007 12:44:57.241194  420401 command_runner.go:130] > c122775af688fa8e537ad4b037afd7babfee3aa4af5622a7bded4ab7948597ba
	I1007 12:44:57.241348  420401 command_runner.go:130] > abe2e400fd4cc83b7e72e85b3caecc098caf9acd8639db45e2213cd92bde3da1
	I1007 12:44:57.241424  420401 command_runner.go:130] > 30046ab9542693f0a90494b418261b1b770616073cf44f8b610e23f71d4f5e95
	I1007 12:44:57.241476  420401 command_runner.go:130] > a0be5b855d1afd4130d3975abdfe105dde112c5f46bc38dfe30f4d65c54d92ee
	I1007 12:44:57.241529  420401 command_runner.go:130] > faa59e20f08afbd1ea30f61205e39df0670bf192eb384ba421be6325871d088e
	I1007 12:44:57.241620  420401 command_runner.go:130] > a6c0ada6ae97b3bb470ee835db8da6ace7c6948d9afe2d60fd0fd9f2ede257b3
	I1007 12:44:57.243121  420401 cri.go:89] found id: "7ba9799f96099d623cb6f05b7f50ab6b884e9a5e917bdee263a6c4eb89260a2b"
	I1007 12:44:57.243135  420401 cri.go:89] found id: "a4a3b707c2ce9aa809e6b495f0dd6d9d6eb5f9ebb8f247654bc4ded294548ea6"
	I1007 12:44:57.243140  420401 cri.go:89] found id: "c122775af688fa8e537ad4b037afd7babfee3aa4af5622a7bded4ab7948597ba"
	I1007 12:44:57.243144  420401 cri.go:89] found id: "abe2e400fd4cc83b7e72e85b3caecc098caf9acd8639db45e2213cd92bde3da1"
	I1007 12:44:57.243148  420401 cri.go:89] found id: "30046ab9542693f0a90494b418261b1b770616073cf44f8b610e23f71d4f5e95"
	I1007 12:44:57.243153  420401 cri.go:89] found id: "a0be5b855d1afd4130d3975abdfe105dde112c5f46bc38dfe30f4d65c54d92ee"
	I1007 12:44:57.243157  420401 cri.go:89] found id: "faa59e20f08afbd1ea30f61205e39df0670bf192eb384ba421be6325871d088e"
	I1007 12:44:57.243161  420401 cri.go:89] found id: "a6c0ada6ae97b3bb470ee835db8da6ace7c6948d9afe2d60fd0fd9f2ede257b3"
	I1007 12:44:57.243166  420401 cri.go:89] found id: ""
	I1007 12:44:57.243230  420401 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
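Note: the tail of the log above validates each control-plane certificate with `openssl x509 -noout -checkend 86400`, i.e. "will this certificate expire within the next 24 hours?". The following is a minimal, illustrative sketch of that check written for this report; it is not taken from minikube's certs package, and the certificate path in main() is only an example.

package main

import (
	"fmt"
	"os/exec"
)

// certExpiresSoon mirrors the check visible in the log above:
// `openssl x509 -noout -in <cert> -checkend <seconds>` exits non-zero
// when the certificate will expire within the given number of seconds.
func certExpiresSoon(certPath string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath,
		"-checkend", fmt.Sprint(seconds))
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: certificate expires within the window
		}
		return false, err // openssl missing, unreadable file, etc.
	}
	return false, nil
}

func main() {
	soon, err := certExpiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}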
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-263097 -n multinode-263097
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-263097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (332.60s)
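The truncated log above ends while minikube is enumerating kube-system containers with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` before restarting the cluster. The sketch below is a rough, self-contained approximation of that listing step only, not the actual cri.go implementation; it assumes crictl is on PATH and reachable via sudo, as in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers approximates the listing step shown in the log:
// ask crictl for the IDs of every container (running or not) whose pod
// lives in the kube-system namespace.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}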

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-263097 stop: exit status 82 (2m0.495763423s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-263097-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-263097 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-263097 status: (18.752541525s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-263097 status --alsologtostderr: (3.361003086s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-263097 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-263097 status --alsologtostderr": 
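For context, the two assertions above expect every remaining node to report "Stopped" for both its host and its kubelet after `minikube stop`; because the stop timed out, at least one node was still running. The helper below is a hypothetical sketch of that kind of count, not the helper multinode_test.go actually uses, and the "host:"/"kubelet:" line prefixes are assumptions about the status output format.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// countStopped runs `minikube status` for the given profile and counts how
// many nodes report a stopped host and a stopped kubelet. Illustrative only.
func countStopped(minikubeBin, profile string) (hosts, kubelets int) {
	// `minikube status` exits non-zero when components are stopped, so the
	// error is deliberately ignored and only the printed text is inspected.
	out, _ := exec.Command(minikubeBin, "-p", profile, "status").CombinedOutput()
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "host:") && strings.Contains(line, "Stopped"):
			hosts++
		case strings.HasPrefix(line, "kubelet:") && strings.Contains(line, "Stopped"):
			kubelets++
		}
	}
	return hosts, kubelets
}

func main() {
	hosts, kubelets, _ := "out/minikube-linux-amd64", "multinode-263097", 0
	_ = kubelets
	h, k := countStopped(hosts, kubelets)
	fmt.Printf("stopped hosts: %d, stopped kubelets: %d\n", h, k)
}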
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-263097 -n multinode-263097
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-263097 logs -n 25: (2.134368884s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m02:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097:/home/docker/cp-test_multinode-263097-m02_multinode-263097.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n multinode-263097 sudo cat                                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /home/docker/cp-test_multinode-263097-m02_multinode-263097.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m02:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03:/home/docker/cp-test_multinode-263097-m02_multinode-263097-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n multinode-263097-m03 sudo cat                                   | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /home/docker/cp-test_multinode-263097-m02_multinode-263097-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp testdata/cp-test.txt                                                | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m03:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3309803868/001/cp-test_multinode-263097-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m03:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097:/home/docker/cp-test_multinode-263097-m03_multinode-263097.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n multinode-263097 sudo cat                                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /home/docker/cp-test_multinode-263097-m03_multinode-263097.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m03:/home/docker/cp-test.txt                       | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m02:/home/docker/cp-test_multinode-263097-m03_multinode-263097-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n multinode-263097-m02 sudo cat                                   | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /home/docker/cp-test_multinode-263097-m03_multinode-263097-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-263097 node stop m03                                                          | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	| node    | multinode-263097 node start                                                             | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:41 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-263097                                                                | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:41 UTC |                     |
	| stop    | -p multinode-263097                                                                     | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:41 UTC |                     |
	| start   | -p multinode-263097                                                                     | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:43 UTC | 07 Oct 24 12:46 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-263097                                                                | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:46 UTC |                     |
	| node    | multinode-263097 node delete                                                            | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:46 UTC | 07 Oct 24 12:46 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-263097 stop                                                                   | multinode-263097 | jenkins | v1.34.0 | 07 Oct 24 12:46 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:43:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:43:14.159864  420401 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:43:14.160148  420401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:43:14.160157  420401 out.go:358] Setting ErrFile to fd 2...
	I1007 12:43:14.160161  420401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:43:14.160377  420401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:43:14.160997  420401 out.go:352] Setting JSON to false
	I1007 12:43:14.162036  420401 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8740,"bootTime":1728296254,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:43:14.162167  420401 start.go:139] virtualization: kvm guest
	I1007 12:43:14.164687  420401 out.go:177] * [multinode-263097] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:43:14.166426  420401 notify.go:220] Checking for updates...
	I1007 12:43:14.166453  420401 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:43:14.168029  420401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:43:14.169597  420401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:43:14.171125  420401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:43:14.172634  420401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:43:14.173948  420401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:43:14.175659  420401 config.go:182] Loaded profile config "multinode-263097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:43:14.175763  420401 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:43:14.176243  420401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:43:14.176317  420401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:43:14.192593  420401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35673
	I1007 12:43:14.193112  420401 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:43:14.193755  420401 main.go:141] libmachine: Using API Version  1
	I1007 12:43:14.193783  420401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:43:14.194191  420401 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:43:14.194484  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:43:14.233958  420401 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 12:43:14.235420  420401 start.go:297] selected driver: kvm2
	I1007 12:43:14.235442  420401 start.go:901] validating driver "kvm2" against &{Name:multinode-263097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:multinode-263097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:43:14.235597  420401 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:43:14.236037  420401 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:43:14.236136  420401 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:43:14.251846  420401 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:43:14.252713  420401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:43:14.252766  420401 cni.go:84] Creating CNI manager for ""
	I1007 12:43:14.252843  420401 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 12:43:14.252920  420401 start.go:340] cluster config:
	{Name:multinode-263097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-263097 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubefl
ow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:43:14.253083  420401 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:43:14.254978  420401 out.go:177] * Starting "multinode-263097" primary control-plane node in "multinode-263097" cluster
	I1007 12:43:14.256363  420401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:43:14.256400  420401 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 12:43:14.256407  420401 cache.go:56] Caching tarball of preloaded images
	I1007 12:43:14.256558  420401 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 12:43:14.256573  420401 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 12:43:14.256693  420401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/config.json ...
	I1007 12:43:14.256903  420401 start.go:360] acquireMachinesLock for multinode-263097: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:43:14.256954  420401 start.go:364] duration metric: took 32.229µs to acquireMachinesLock for "multinode-263097"
	I1007 12:43:14.256979  420401 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:43:14.256986  420401 fix.go:54] fixHost starting: 
	I1007 12:43:14.257256  420401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:43:14.257289  420401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:43:14.272815  420401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40045
	I1007 12:43:14.273236  420401 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:43:14.273733  420401 main.go:141] libmachine: Using API Version  1
	I1007 12:43:14.273758  420401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:43:14.274129  420401 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:43:14.274288  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:43:14.274445  420401 main.go:141] libmachine: (multinode-263097) Calling .GetState
	I1007 12:43:14.275982  420401 fix.go:112] recreateIfNeeded on multinode-263097: state=Running err=<nil>
	W1007 12:43:14.276002  420401 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:43:14.278038  420401 out.go:177] * Updating the running kvm2 "multinode-263097" VM ...
	I1007 12:43:14.279291  420401 machine.go:93] provisionDockerMachine start ...
	I1007 12:43:14.279317  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:43:14.279545  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.282056  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.282431  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.282479  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.282649  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:43:14.282816  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.283079  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.283231  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:43:14.283393  420401 main.go:141] libmachine: Using SSH client type: native
	I1007 12:43:14.283612  420401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 12:43:14.283625  420401 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:43:14.388784  420401 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-263097
	
	I1007 12:43:14.388822  420401 main.go:141] libmachine: (multinode-263097) Calling .GetMachineName
	I1007 12:43:14.389110  420401 buildroot.go:166] provisioning hostname "multinode-263097"
	I1007 12:43:14.389145  420401 main.go:141] libmachine: (multinode-263097) Calling .GetMachineName
	I1007 12:43:14.389392  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.392145  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.392687  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.392718  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.392834  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:43:14.393005  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.393204  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.393391  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:43:14.393543  420401 main.go:141] libmachine: Using SSH client type: native
	I1007 12:43:14.393722  420401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 12:43:14.393733  420401 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-263097 && echo "multinode-263097" | sudo tee /etc/hostname
	I1007 12:43:14.508900  420401 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-263097
	
	I1007 12:43:14.508930  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.511492  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.511826  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.511855  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.512076  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:43:14.512257  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.512413  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.512531  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:43:14.512685  420401 main.go:141] libmachine: Using SSH client type: native
	I1007 12:43:14.512865  420401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 12:43:14.512880  420401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-263097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-263097/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-263097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:43:14.612152  420401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:43:14.612187  420401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:43:14.612268  420401 buildroot.go:174] setting up certificates
	I1007 12:43:14.612278  420401 provision.go:84] configureAuth start
	I1007 12:43:14.612291  420401 main.go:141] libmachine: (multinode-263097) Calling .GetMachineName
	I1007 12:43:14.612578  420401 main.go:141] libmachine: (multinode-263097) Calling .GetIP
	I1007 12:43:14.615530  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.615948  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.615977  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.616100  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.618406  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.618682  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.618725  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.618833  420401 provision.go:143] copyHostCerts
	I1007 12:43:14.618871  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:43:14.618917  420401 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:43:14.619041  420401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:43:14.619151  420401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:43:14.619292  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:43:14.619321  420401 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:43:14.619331  420401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:43:14.619375  420401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:43:14.619442  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:43:14.619459  420401 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:43:14.619465  420401 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:43:14.619494  420401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:43:14.619556  420401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.multinode-263097 san=[127.0.0.1 192.168.39.213 localhost minikube multinode-263097]
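	(The SAN list in the log line above is what the regenerated server certificate is expected to carry. A quick way to confirm that on the Jenkins host, should it ever be in doubt, is an openssl inspection of the server.pem the log points at — a manual spot-check, not something the test itself runs:

	    # Manual check (not part of the test run): list the SANs in the regenerated server cert.
	    # The path is the one printed in the provisioning log above.
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
	)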
	I1007 12:43:14.807913  420401 provision.go:177] copyRemoteCerts
	I1007 12:43:14.807983  420401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:43:14.808011  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.810757  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.811135  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.811166  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.811339  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:43:14.811526  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.811652  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:43:14.811762  420401 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/multinode-263097/id_rsa Username:docker}
	I1007 12:43:14.895014  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1007 12:43:14.895138  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:43:14.922525  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1007 12:43:14.922614  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1007 12:43:14.949002  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1007 12:43:14.949103  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:43:14.977423  420401 provision.go:87] duration metric: took 365.128305ms to configureAuth
	I1007 12:43:14.977464  420401 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:43:14.977693  420401 config.go:182] Loaded profile config "multinode-263097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:43:14.977776  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:43:14.980869  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.981268  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:43:14.981295  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:43:14.981566  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:43:14.981740  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.981889  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:43:14.982023  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:43:14.982209  420401 main.go:141] libmachine: Using SSH client type: native
	I1007 12:43:14.982390  420401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 12:43:14.982412  420401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:44:45.672644  420401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:44:45.672682  420401 machine.go:96] duration metric: took 1m31.39337225s to provisionDockerMachine
	I1007 12:44:45.672702  420401 start.go:293] postStartSetup for "multinode-263097" (driver="kvm2")
	I1007 12:44:45.672726  420401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:44:45.672777  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:44:45.673149  420401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:44:45.673192  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:44:45.676614  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.677095  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:45.677125  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.677257  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:44:45.677455  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:44:45.677580  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:44:45.677750  420401 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/multinode-263097/id_rsa Username:docker}
	I1007 12:44:45.759356  420401 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:44:45.764202  420401 command_runner.go:130] > NAME=Buildroot
	I1007 12:44:45.764227  420401 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1007 12:44:45.764240  420401 command_runner.go:130] > ID=buildroot
	I1007 12:44:45.764258  420401 command_runner.go:130] > VERSION_ID=2023.02.9
	I1007 12:44:45.764264  420401 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1007 12:44:45.764292  420401 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:44:45.764308  420401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:44:45.764377  420401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:44:45.764449  420401 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:44:45.764476  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /etc/ssl/certs/3842712.pem
	I1007 12:44:45.764563  420401 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:44:45.774741  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:44:45.801707  420401 start.go:296] duration metric: took 128.987277ms for postStartSetup
	I1007 12:44:45.801757  420401 fix.go:56] duration metric: took 1m31.544771096s for fixHost
	I1007 12:44:45.801784  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:44:45.804991  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.805385  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:45.805419  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.805599  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:44:45.805813  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:44:45.805927  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:44:45.806093  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:44:45.806268  420401 main.go:141] libmachine: Using SSH client type: native
	I1007 12:44:45.806492  420401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.213 22 <nil> <nil>}
	I1007 12:44:45.806504  420401 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:44:45.908112  420401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728305085.881718886
	
	I1007 12:44:45.908134  420401 fix.go:216] guest clock: 1728305085.881718886
	I1007 12:44:45.908142  420401 fix.go:229] Guest: 2024-10-07 12:44:45.881718886 +0000 UTC Remote: 2024-10-07 12:44:45.801762257 +0000 UTC m=+91.685549591 (delta=79.956629ms)
	I1007 12:44:45.908188  420401 fix.go:200] guest clock delta is within tolerance: 79.956629ms
	I1007 12:44:45.908197  420401 start.go:83] releasing machines lock for "multinode-263097", held for 1m31.651222907s
	I1007 12:44:45.908225  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:44:45.908459  420401 main.go:141] libmachine: (multinode-263097) Calling .GetIP
	I1007 12:44:45.911342  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.911659  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:45.911685  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.911926  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:44:45.912485  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:44:45.912665  420401 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:44:45.912793  420401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:44:45.912838  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:44:45.912872  420401 ssh_runner.go:195] Run: cat /version.json
	I1007 12:44:45.912895  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:44:45.915733  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.915838  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.916135  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:45.916162  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.916276  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:44:45.916407  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:45.916429  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:45.916416  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:44:45.916592  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:44:45.916595  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:44:45.916787  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:44:45.916802  420401 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/multinode-263097/id_rsa Username:docker}
	I1007 12:44:45.916892  420401 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:44:45.917038  420401 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/multinode-263097/id_rsa Username:docker}
	I1007 12:44:45.988274  420401 command_runner.go:130] > {"iso_version": "v1.34.0-1727108440-19696", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "09d18ff16db81cf1cb24cd6e95f197b54c5f843c"}
	I1007 12:44:45.988452  420401 ssh_runner.go:195] Run: systemctl --version
	I1007 12:44:46.017878  420401 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1007 12:44:46.018640  420401 command_runner.go:130] > systemd 252 (252)
	I1007 12:44:46.018677  420401 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1007 12:44:46.018749  420401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:44:46.183205  420401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 12:44:46.189510  420401 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1007 12:44:46.189558  420401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:44:46.189630  420401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:44:46.199513  420401 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 12:44:46.199552  420401 start.go:495] detecting cgroup driver to use...
	I1007 12:44:46.199664  420401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:44:46.216583  420401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:44:46.231367  420401 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:44:46.231428  420401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:44:46.246225  420401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:44:46.260758  420401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:44:46.411632  420401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:44:46.560393  420401 docker.go:233] disabling docker service ...
	I1007 12:44:46.560497  420401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:44:46.584668  420401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:44:46.601279  420401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:44:46.753673  420401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:44:46.915970  420401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:44:46.931191  420401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:44:46.951574  420401 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1007 12:44:46.951623  420401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 12:44:46.951678  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:46.963117  420401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:44:46.963210  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:46.974629  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:46.985874  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:46.998127  420401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:44:47.010474  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:47.022586  420401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:47.035715  420401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:44:47.048449  420401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:44:47.059510  420401 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1007 12:44:47.059673  420401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:44:47.070900  420401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:44:47.231249  420401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 12:44:56.198259  420401 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.966956254s)
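	(Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10, switch the cgroup manager to cgroupfs, scope conmon to the pod cgroup, and allow unprivileged ports from 0, all in /etc/crio/crio.conf.d/02-crio.conf, before the crio restart that completes here. A manual spot-check inside the VM would look like the sketch below; the expected values are reconstructed from the sed commands in this log, not captured from the run:

	    # Manual spot-check of the CRI-O drop-in rewritten by the sed commands above.
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # Expected (reconstructed, not captured):
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	)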
	I1007 12:44:56.198299  420401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:44:56.198360  420401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:44:56.203586  420401 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1007 12:44:56.203618  420401 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1007 12:44:56.203628  420401 command_runner.go:130] > Device: 0,22	Inode: 1313        Links: 1
	I1007 12:44:56.203638  420401 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 12:44:56.203646  420401 command_runner.go:130] > Access: 2024-10-07 12:44:56.053199551 +0000
	I1007 12:44:56.203655  420401 command_runner.go:130] > Modify: 2024-10-07 12:44:56.053199551 +0000
	I1007 12:44:56.203663  420401 command_runner.go:130] > Change: 2024-10-07 12:44:56.053199551 +0000
	I1007 12:44:56.203668  420401 command_runner.go:130] >  Birth: -
	I1007 12:44:56.203710  420401 start.go:563] Will wait 60s for crictl version
	I1007 12:44:56.203774  420401 ssh_runner.go:195] Run: which crictl
	I1007 12:44:56.207546  420401 command_runner.go:130] > /usr/bin/crictl
	I1007 12:44:56.207782  420401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:44:56.252849  420401 command_runner.go:130] > Version:  0.1.0
	I1007 12:44:56.252877  420401 command_runner.go:130] > RuntimeName:  cri-o
	I1007 12:44:56.252881  420401 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1007 12:44:56.252886  420401 command_runner.go:130] > RuntimeApiVersion:  v1
	I1007 12:44:56.252944  420401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:44:56.253064  420401 ssh_runner.go:195] Run: crio --version
	I1007 12:44:56.285242  420401 command_runner.go:130] > crio version 1.29.1
	I1007 12:44:56.285265  420401 command_runner.go:130] > Version:        1.29.1
	I1007 12:44:56.285271  420401 command_runner.go:130] > GitCommit:      unknown
	I1007 12:44:56.285276  420401 command_runner.go:130] > GitCommitDate:  unknown
	I1007 12:44:56.285280  420401 command_runner.go:130] > GitTreeState:   clean
	I1007 12:44:56.285285  420401 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 12:44:56.285290  420401 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 12:44:56.285294  420401 command_runner.go:130] > Compiler:       gc
	I1007 12:44:56.285298  420401 command_runner.go:130] > Platform:       linux/amd64
	I1007 12:44:56.285302  420401 command_runner.go:130] > Linkmode:       dynamic
	I1007 12:44:56.285307  420401 command_runner.go:130] > BuildTags:      
	I1007 12:44:56.285311  420401 command_runner.go:130] >   containers_image_ostree_stub
	I1007 12:44:56.285315  420401 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 12:44:56.285318  420401 command_runner.go:130] >   btrfs_noversion
	I1007 12:44:56.285325  420401 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 12:44:56.285331  420401 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 12:44:56.285336  420401 command_runner.go:130] >   seccomp
	I1007 12:44:56.285343  420401 command_runner.go:130] > LDFlags:          unknown
	I1007 12:44:56.285350  420401 command_runner.go:130] > SeccompEnabled:   true
	I1007 12:44:56.285360  420401 command_runner.go:130] > AppArmorEnabled:  false
	I1007 12:44:56.285438  420401 ssh_runner.go:195] Run: crio --version
	I1007 12:44:56.321297  420401 command_runner.go:130] > crio version 1.29.1
	I1007 12:44:56.321328  420401 command_runner.go:130] > Version:        1.29.1
	I1007 12:44:56.321337  420401 command_runner.go:130] > GitCommit:      unknown
	I1007 12:44:56.321344  420401 command_runner.go:130] > GitCommitDate:  unknown
	I1007 12:44:56.321350  420401 command_runner.go:130] > GitTreeState:   clean
	I1007 12:44:56.321358  420401 command_runner.go:130] > BuildDate:      2024-09-23T21:42:27Z
	I1007 12:44:56.321365  420401 command_runner.go:130] > GoVersion:      go1.21.6
	I1007 12:44:56.321371  420401 command_runner.go:130] > Compiler:       gc
	I1007 12:44:56.321377  420401 command_runner.go:130] > Platform:       linux/amd64
	I1007 12:44:56.321381  420401 command_runner.go:130] > Linkmode:       dynamic
	I1007 12:44:56.321386  420401 command_runner.go:130] > BuildTags:      
	I1007 12:44:56.321393  420401 command_runner.go:130] >   containers_image_ostree_stub
	I1007 12:44:56.321397  420401 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1007 12:44:56.321401  420401 command_runner.go:130] >   btrfs_noversion
	I1007 12:44:56.321405  420401 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1007 12:44:56.321409  420401 command_runner.go:130] >   libdm_no_deferred_remove
	I1007 12:44:56.321418  420401 command_runner.go:130] >   seccomp
	I1007 12:44:56.321425  420401 command_runner.go:130] > LDFlags:          unknown
	I1007 12:44:56.321429  420401 command_runner.go:130] > SeccompEnabled:   true
	I1007 12:44:56.321434  420401 command_runner.go:130] > AppArmorEnabled:  false
	I1007 12:44:56.324227  420401 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 12:44:56.325702  420401 main.go:141] libmachine: (multinode-263097) Calling .GetIP
	I1007 12:44:56.328455  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:56.328859  420401 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:44:56.328888  420401 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:44:56.329081  420401 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:44:56.333832  420401 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1007 12:44:56.333965  420401 kubeadm.go:883] updating cluster {Name:multinode-263097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:multinode-263097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gad
get:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:
false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:44:56.334103  420401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 12:44:56.334153  420401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:44:56.382039  420401 command_runner.go:130] > {
	I1007 12:44:56.382066  420401 command_runner.go:130] >   "images": [
	I1007 12:44:56.382073  420401 command_runner.go:130] >     {
	I1007 12:44:56.382084  420401 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 12:44:56.382094  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382103  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 12:44:56.382109  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382115  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382131  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 12:44:56.382142  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 12:44:56.382152  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382159  420401 command_runner.go:130] >       "size": "87190579",
	I1007 12:44:56.382167  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.382173  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382186  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382195  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382201  420401 command_runner.go:130] >     },
	I1007 12:44:56.382206  420401 command_runner.go:130] >     {
	I1007 12:44:56.382233  420401 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 12:44:56.382242  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382251  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 12:44:56.382259  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382266  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382281  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 12:44:56.382294  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 12:44:56.382303  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382313  420401 command_runner.go:130] >       "size": "1363676",
	I1007 12:44:56.382322  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.382334  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382343  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382349  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382358  420401 command_runner.go:130] >     },
	I1007 12:44:56.382363  420401 command_runner.go:130] >     {
	I1007 12:44:56.382375  420401 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 12:44:56.382384  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382393  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 12:44:56.382402  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382414  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382428  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 12:44:56.382443  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 12:44:56.382449  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382453  420401 command_runner.go:130] >       "size": "31470524",
	I1007 12:44:56.382462  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.382467  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382473  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382477  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382480  420401 command_runner.go:130] >     },
	I1007 12:44:56.382484  420401 command_runner.go:130] >     {
	I1007 12:44:56.382490  420401 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 12:44:56.382496  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382501  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 12:44:56.382507  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382510  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382518  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 12:44:56.382530  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 12:44:56.382536  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382540  420401 command_runner.go:130] >       "size": "63273227",
	I1007 12:44:56.382546  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.382553  420401 command_runner.go:130] >       "username": "nonroot",
	I1007 12:44:56.382557  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382563  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382566  420401 command_runner.go:130] >     },
	I1007 12:44:56.382570  420401 command_runner.go:130] >     {
	I1007 12:44:56.382580  420401 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 12:44:56.382584  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382589  420401 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 12:44:56.382594  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382598  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382605  420401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 12:44:56.382614  420401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 12:44:56.382620  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382624  420401 command_runner.go:130] >       "size": "149009664",
	I1007 12:44:56.382628  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.382632  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.382635  420401 command_runner.go:130] >       },
	I1007 12:44:56.382639  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382643  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382647  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382650  420401 command_runner.go:130] >     },
	I1007 12:44:56.382654  420401 command_runner.go:130] >     {
	I1007 12:44:56.382660  420401 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 12:44:56.382666  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382671  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 12:44:56.382676  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382680  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382690  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 12:44:56.382699  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 12:44:56.382703  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382710  420401 command_runner.go:130] >       "size": "95237600",
	I1007 12:44:56.382714  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.382719  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.382723  420401 command_runner.go:130] >       },
	I1007 12:44:56.382729  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382732  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382736  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382741  420401 command_runner.go:130] >     },
	I1007 12:44:56.382744  420401 command_runner.go:130] >     {
	I1007 12:44:56.382750  420401 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 12:44:56.382756  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382761  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 12:44:56.382767  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382771  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382780  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 12:44:56.382789  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 12:44:56.382795  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382798  420401 command_runner.go:130] >       "size": "89437508",
	I1007 12:44:56.382802  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.382806  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.382810  420401 command_runner.go:130] >       },
	I1007 12:44:56.382813  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382817  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382821  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382825  420401 command_runner.go:130] >     },
	I1007 12:44:56.382828  420401 command_runner.go:130] >     {
	I1007 12:44:56.382834  420401 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 12:44:56.382840  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382845  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 12:44:56.382848  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382852  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382866  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 12:44:56.382876  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 12:44:56.382880  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382884  420401 command_runner.go:130] >       "size": "92733849",
	I1007 12:44:56.382889  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.382893  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382896  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.382900  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.382903  420401 command_runner.go:130] >     },
	I1007 12:44:56.382906  420401 command_runner.go:130] >     {
	I1007 12:44:56.382912  420401 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 12:44:56.382916  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.382920  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 12:44:56.382924  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382928  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.382935  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 12:44:56.382942  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 12:44:56.382946  420401 command_runner.go:130] >       ],
	I1007 12:44:56.382950  420401 command_runner.go:130] >       "size": "68420934",
	I1007 12:44:56.382953  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.382975  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.382981  420401 command_runner.go:130] >       },
	I1007 12:44:56.382988  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.382994  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.383000  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.383004  420401 command_runner.go:130] >     },
	I1007 12:44:56.383008  420401 command_runner.go:130] >     {
	I1007 12:44:56.383013  420401 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 12:44:56.383017  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.383021  420401 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 12:44:56.383024  420401 command_runner.go:130] >       ],
	I1007 12:44:56.383028  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.383034  420401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 12:44:56.383041  420401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 12:44:56.383045  420401 command_runner.go:130] >       ],
	I1007 12:44:56.383049  420401 command_runner.go:130] >       "size": "742080",
	I1007 12:44:56.383053  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.383056  420401 command_runner.go:130] >         "value": "65535"
	I1007 12:44:56.383060  420401 command_runner.go:130] >       },
	I1007 12:44:56.383063  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.383067  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.383071  420401 command_runner.go:130] >       "pinned": true
	I1007 12:44:56.383074  420401 command_runner.go:130] >     }
	I1007 12:44:56.383078  420401 command_runner.go:130] >   ]
	I1007 12:44:56.383081  420401 command_runner.go:130] > }
	I1007 12:44:56.383315  420401 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:44:56.383332  420401 crio.go:433] Images already preloaded, skipping extraction
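	(The JSON dump above is what minikube parses when deciding that every image required for v1.31.1 on crio is already present. The same inventory can be read by hand inside the VM with crictl's plain table output — a manual check, not part of the test run:

	    # Human-readable variant of the image listing minikube just parsed.
	    sudo crictl images
	)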
	I1007 12:44:56.383385  420401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:44:56.417688  420401 command_runner.go:130] > {
	I1007 12:44:56.417712  420401 command_runner.go:130] >   "images": [
	I1007 12:44:56.417718  420401 command_runner.go:130] >     {
	I1007 12:44:56.417729  420401 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1007 12:44:56.417736  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.417743  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1007 12:44:56.417748  420401 command_runner.go:130] >       ],
	I1007 12:44:56.417754  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.417765  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1007 12:44:56.417776  420401 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1007 12:44:56.417785  420401 command_runner.go:130] >       ],
	I1007 12:44:56.417791  420401 command_runner.go:130] >       "size": "87190579",
	I1007 12:44:56.417798  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.417804  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.417812  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.417816  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.417821  420401 command_runner.go:130] >     },
	I1007 12:44:56.417826  420401 command_runner.go:130] >     {
	I1007 12:44:56.417833  420401 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1007 12:44:56.417837  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.417843  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1007 12:44:56.417849  420401 command_runner.go:130] >       ],
	I1007 12:44:56.417856  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.417872  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1007 12:44:56.417884  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1007 12:44:56.417889  420401 command_runner.go:130] >       ],
	I1007 12:44:56.417894  420401 command_runner.go:130] >       "size": "1363676",
	I1007 12:44:56.417914  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.417926  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.417934  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.417941  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.417945  420401 command_runner.go:130] >     },
	I1007 12:44:56.417949  420401 command_runner.go:130] >     {
	I1007 12:44:56.417955  420401 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1007 12:44:56.417959  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.417967  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1007 12:44:56.417973  420401 command_runner.go:130] >       ],
	I1007 12:44:56.417977  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.417989  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1007 12:44:56.418004  420401 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1007 12:44:56.418013  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418023  420401 command_runner.go:130] >       "size": "31470524",
	I1007 12:44:56.418032  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.418040  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418054  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418061  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418064  420401 command_runner.go:130] >     },
	I1007 12:44:56.418068  420401 command_runner.go:130] >     {
	I1007 12:44:56.418074  420401 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1007 12:44:56.418082  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418090  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1007 12:44:56.418099  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418106  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418120  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1007 12:44:56.418139  420401 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1007 12:44:56.418148  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418156  420401 command_runner.go:130] >       "size": "63273227",
	I1007 12:44:56.418164  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.418172  420401 command_runner.go:130] >       "username": "nonroot",
	I1007 12:44:56.418177  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418185  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418190  420401 command_runner.go:130] >     },
	I1007 12:44:56.418213  420401 command_runner.go:130] >     {
	I1007 12:44:56.418226  420401 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1007 12:44:56.418233  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418243  420401 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1007 12:44:56.418249  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418263  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418272  420401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1007 12:44:56.418284  420401 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1007 12:44:56.418293  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418300  420401 command_runner.go:130] >       "size": "149009664",
	I1007 12:44:56.418310  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.418320  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.418328  420401 command_runner.go:130] >       },
	I1007 12:44:56.418336  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418345  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418354  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418361  420401 command_runner.go:130] >     },
	I1007 12:44:56.418365  420401 command_runner.go:130] >     {
	I1007 12:44:56.418376  420401 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1007 12:44:56.418385  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418396  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1007 12:44:56.418404  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418413  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418427  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1007 12:44:56.418441  420401 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1007 12:44:56.418449  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418454  420401 command_runner.go:130] >       "size": "95237600",
	I1007 12:44:56.418463  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.418469  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.418478  420401 command_runner.go:130] >       },
	I1007 12:44:56.418485  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418493  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418503  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418509  420401 command_runner.go:130] >     },
	I1007 12:44:56.418518  420401 command_runner.go:130] >     {
	I1007 12:44:56.418528  420401 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1007 12:44:56.418538  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418547  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1007 12:44:56.418553  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418563  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418579  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1007 12:44:56.418594  420401 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1007 12:44:56.418603  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418610  420401 command_runner.go:130] >       "size": "89437508",
	I1007 12:44:56.418619  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.418625  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.418633  420401 command_runner.go:130] >       },
	I1007 12:44:56.418638  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418643  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418650  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418658  420401 command_runner.go:130] >     },
	I1007 12:44:56.418663  420401 command_runner.go:130] >     {
	I1007 12:44:56.418673  420401 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1007 12:44:56.418682  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418690  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1007 12:44:56.418698  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418704  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418725  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1007 12:44:56.418738  420401 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1007 12:44:56.418744  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418751  420401 command_runner.go:130] >       "size": "92733849",
	I1007 12:44:56.418758  420401 command_runner.go:130] >       "uid": null,
	I1007 12:44:56.418765  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418774  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418781  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418789  420401 command_runner.go:130] >     },
	I1007 12:44:56.418795  420401 command_runner.go:130] >     {
	I1007 12:44:56.418806  420401 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1007 12:44:56.418813  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418822  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1007 12:44:56.418828  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418835  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.418849  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1007 12:44:56.418864  420401 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1007 12:44:56.418869  420401 command_runner.go:130] >       ],
	I1007 12:44:56.418879  420401 command_runner.go:130] >       "size": "68420934",
	I1007 12:44:56.418889  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.418896  420401 command_runner.go:130] >         "value": "0"
	I1007 12:44:56.418905  420401 command_runner.go:130] >       },
	I1007 12:44:56.418912  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.418921  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.418931  420401 command_runner.go:130] >       "pinned": false
	I1007 12:44:56.418937  420401 command_runner.go:130] >     },
	I1007 12:44:56.418945  420401 command_runner.go:130] >     {
	I1007 12:44:56.418955  420401 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1007 12:44:56.418982  420401 command_runner.go:130] >       "repoTags": [
	I1007 12:44:56.418990  420401 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1007 12:44:56.418996  420401 command_runner.go:130] >       ],
	I1007 12:44:56.419005  420401 command_runner.go:130] >       "repoDigests": [
	I1007 12:44:56.419019  420401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1007 12:44:56.419033  420401 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1007 12:44:56.419039  420401 command_runner.go:130] >       ],
	I1007 12:44:56.419045  420401 command_runner.go:130] >       "size": "742080",
	I1007 12:44:56.419053  420401 command_runner.go:130] >       "uid": {
	I1007 12:44:56.419063  420401 command_runner.go:130] >         "value": "65535"
	I1007 12:44:56.419071  420401 command_runner.go:130] >       },
	I1007 12:44:56.419080  420401 command_runner.go:130] >       "username": "",
	I1007 12:44:56.419089  420401 command_runner.go:130] >       "spec": null,
	I1007 12:44:56.419097  420401 command_runner.go:130] >       "pinned": true
	I1007 12:44:56.419105  420401 command_runner.go:130] >     }
	I1007 12:44:56.419111  420401 command_runner.go:130] >   ]
	I1007 12:44:56.419119  420401 command_runner.go:130] > }
	I1007 12:44:56.419318  420401 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 12:44:56.419335  420401 cache_images.go:84] Images are preloaded, skipping loading
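The image inventory that minikube checks here is ordinary `crictl images --output json` output, exactly as dumped above. As a minimal, illustrative sketch of how that payload can be decoded outside of minikube, the Go program below mirrors only the JSON keys visible in this log (id, repoTags, repoDigests, size, pinned); invoking crictl via sudo on the local PATH is an assumption about the host, and this is not the code path minikube itself uses in crio.go.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the structure of `crictl images --output json`
// as it appears in the log above; only the fields used here are declared.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	// Assumes crictl is installed and reachable via sudo on this host.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s %12s bytes  pinned=%v\n", tag, img.Size, img.Pinned)
	}
}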
	I1007 12:44:56.419344  420401 kubeadm.go:934] updating node { 192.168.39.213 8443 v1.31.1 crio true true} ...
	I1007 12:44:56.419468  420401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-263097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-263097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:44:56.419548  420401 ssh_runner.go:195] Run: crio config
	I1007 12:44:56.464279  420401 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1007 12:44:56.464309  420401 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1007 12:44:56.464316  420401 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1007 12:44:56.464319  420401 command_runner.go:130] > #
	I1007 12:44:56.464334  420401 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1007 12:44:56.464341  420401 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1007 12:44:56.464351  420401 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1007 12:44:56.464366  420401 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1007 12:44:56.464371  420401 command_runner.go:130] > # reload'.
	I1007 12:44:56.464380  420401 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1007 12:44:56.464389  420401 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1007 12:44:56.464398  420401 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1007 12:44:56.464407  420401 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1007 12:44:56.464412  420401 command_runner.go:130] > [crio]
	I1007 12:44:56.464423  420401 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1007 12:44:56.464432  420401 command_runner.go:130] > # containers images, in this directory.
	I1007 12:44:56.464439  420401 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1007 12:44:56.464448  420401 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1007 12:44:56.464453  420401 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1007 12:44:56.464464  420401 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1007 12:44:56.464469  420401 command_runner.go:130] > # imagestore = ""
	I1007 12:44:56.464475  420401 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1007 12:44:56.464481  420401 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1007 12:44:56.464486  420401 command_runner.go:130] > storage_driver = "overlay"
	I1007 12:44:56.464492  420401 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1007 12:44:56.464498  420401 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1007 12:44:56.464502  420401 command_runner.go:130] > storage_option = [
	I1007 12:44:56.464572  420401 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1007 12:44:56.464586  420401 command_runner.go:130] > ]
	I1007 12:44:56.464596  420401 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1007 12:44:56.464602  420401 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1007 12:44:56.464836  420401 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1007 12:44:56.464853  420401 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1007 12:44:56.464863  420401 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1007 12:44:56.464868  420401 command_runner.go:130] > # always happen on a node reboot
	I1007 12:44:56.465129  420401 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1007 12:44:56.465181  420401 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1007 12:44:56.465197  420401 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1007 12:44:56.465204  420401 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1007 12:44:56.465328  420401 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1007 12:44:56.465345  420401 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1007 12:44:56.465358  420401 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1007 12:44:56.465573  420401 command_runner.go:130] > # internal_wipe = true
	I1007 12:44:56.465601  420401 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1007 12:44:56.465612  420401 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1007 12:44:56.465869  420401 command_runner.go:130] > # internal_repair = false
	I1007 12:44:56.465878  420401 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1007 12:44:56.465884  420401 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1007 12:44:56.465889  420401 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1007 12:44:56.466070  420401 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1007 12:44:56.466086  420401 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1007 12:44:56.466093  420401 command_runner.go:130] > [crio.api]
	I1007 12:44:56.466105  420401 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1007 12:44:56.466359  420401 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1007 12:44:56.466373  420401 command_runner.go:130] > # IP address on which the stream server will listen.
	I1007 12:44:56.466543  420401 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1007 12:44:56.466559  420401 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1007 12:44:56.466568  420401 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1007 12:44:56.466765  420401 command_runner.go:130] > # stream_port = "0"
	I1007 12:44:56.466779  420401 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1007 12:44:56.467012  420401 command_runner.go:130] > # stream_enable_tls = false
	I1007 12:44:56.467030  420401 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1007 12:44:56.467202  420401 command_runner.go:130] > # stream_idle_timeout = ""
	I1007 12:44:56.467218  420401 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1007 12:44:56.467229  420401 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1007 12:44:56.467235  420401 command_runner.go:130] > # minutes.
	I1007 12:44:56.467521  420401 command_runner.go:130] > # stream_tls_cert = ""
	I1007 12:44:56.467538  420401 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1007 12:44:56.467547  420401 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1007 12:44:56.467688  420401 command_runner.go:130] > # stream_tls_key = ""
	I1007 12:44:56.467699  420401 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1007 12:44:56.467705  420401 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1007 12:44:56.467720  420401 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1007 12:44:56.467856  420401 command_runner.go:130] > # stream_tls_ca = ""
	I1007 12:44:56.467875  420401 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 12:44:56.468080  420401 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1007 12:44:56.468093  420401 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1007 12:44:56.468193  420401 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1007 12:44:56.468208  420401 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1007 12:44:56.468218  420401 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1007 12:44:56.468227  420401 command_runner.go:130] > [crio.runtime]
	I1007 12:44:56.468234  420401 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1007 12:44:56.468241  420401 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1007 12:44:56.468246  420401 command_runner.go:130] > # "nofile=1024:2048"
	I1007 12:44:56.468277  420401 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1007 12:44:56.468345  420401 command_runner.go:130] > # default_ulimits = [
	I1007 12:44:56.468598  420401 command_runner.go:130] > # ]
	I1007 12:44:56.468613  420401 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1007 12:44:56.468787  420401 command_runner.go:130] > # no_pivot = false
	I1007 12:44:56.468809  420401 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1007 12:44:56.468819  420401 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1007 12:44:56.469035  420401 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1007 12:44:56.469049  420401 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1007 12:44:56.469054  420401 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1007 12:44:56.469073  420401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 12:44:56.469477  420401 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1007 12:44:56.469496  420401 command_runner.go:130] > # Cgroup setting for conmon
	I1007 12:44:56.469507  420401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1007 12:44:56.469632  420401 command_runner.go:130] > conmon_cgroup = "pod"
	I1007 12:44:56.469649  420401 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1007 12:44:56.469657  420401 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1007 12:44:56.469667  420401 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1007 12:44:56.469676  420401 command_runner.go:130] > conmon_env = [
	I1007 12:44:56.469799  420401 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 12:44:56.469812  420401 command_runner.go:130] > ]
	I1007 12:44:56.469821  420401 command_runner.go:130] > # Additional environment variables to set for all the
	I1007 12:44:56.469829  420401 command_runner.go:130] > # containers. These are overridden if set in the
	I1007 12:44:56.469839  420401 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1007 12:44:56.469936  420401 command_runner.go:130] > # default_env = [
	I1007 12:44:56.470072  420401 command_runner.go:130] > # ]
	I1007 12:44:56.470086  420401 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1007 12:44:56.470098  420401 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1007 12:44:56.470399  420401 command_runner.go:130] > # selinux = false
	I1007 12:44:56.470412  420401 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1007 12:44:56.470422  420401 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1007 12:44:56.470432  420401 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1007 12:44:56.470691  420401 command_runner.go:130] > # seccomp_profile = ""
	I1007 12:44:56.470710  420401 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1007 12:44:56.470721  420401 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1007 12:44:56.470731  420401 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1007 12:44:56.470743  420401 command_runner.go:130] > # which might increase security.
	I1007 12:44:56.470751  420401 command_runner.go:130] > # This option is currently deprecated,
	I1007 12:44:56.470763  420401 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1007 12:44:56.470839  420401 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1007 12:44:56.470853  420401 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1007 12:44:56.470863  420401 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1007 12:44:56.470875  420401 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1007 12:44:56.470888  420401 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1007 12:44:56.470904  420401 command_runner.go:130] > # This option supports live configuration reload.
	I1007 12:44:56.471133  420401 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1007 12:44:56.471145  420401 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1007 12:44:56.471152  420401 command_runner.go:130] > # the cgroup blockio controller.
	I1007 12:44:56.471336  420401 command_runner.go:130] > # blockio_config_file = ""
	I1007 12:44:56.471350  420401 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1007 12:44:56.471357  420401 command_runner.go:130] > # blockio parameters.
	I1007 12:44:56.471518  420401 command_runner.go:130] > # blockio_reload = false
	I1007 12:44:56.471531  420401 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1007 12:44:56.471538  420401 command_runner.go:130] > # irqbalance daemon.
	I1007 12:44:56.472773  420401 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1007 12:44:56.472788  420401 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1007 12:44:56.472796  420401 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1007 12:44:56.472802  420401 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1007 12:44:56.472810  420401 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1007 12:44:56.472823  420401 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1007 12:44:56.472838  420401 command_runner.go:130] > # This option supports live configuration reload.
	I1007 12:44:56.472849  420401 command_runner.go:130] > # rdt_config_file = ""
	I1007 12:44:56.472857  420401 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1007 12:44:56.472870  420401 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1007 12:44:56.472889  420401 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1007 12:44:56.472898  420401 command_runner.go:130] > # separate_pull_cgroup = ""
	I1007 12:44:56.472911  420401 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1007 12:44:56.472924  420401 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1007 12:44:56.472934  420401 command_runner.go:130] > # will be added.
	I1007 12:44:56.472941  420401 command_runner.go:130] > # default_capabilities = [
	I1007 12:44:56.472950  420401 command_runner.go:130] > # 	"CHOWN",
	I1007 12:44:56.472959  420401 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1007 12:44:56.472967  420401 command_runner.go:130] > # 	"FSETID",
	I1007 12:44:56.472971  420401 command_runner.go:130] > # 	"FOWNER",
	I1007 12:44:56.472978  420401 command_runner.go:130] > # 	"SETGID",
	I1007 12:44:56.472984  420401 command_runner.go:130] > # 	"SETUID",
	I1007 12:44:56.472993  420401 command_runner.go:130] > # 	"SETPCAP",
	I1007 12:44:56.473002  420401 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1007 12:44:56.473009  420401 command_runner.go:130] > # 	"KILL",
	I1007 12:44:56.473017  420401 command_runner.go:130] > # ]
	I1007 12:44:56.473030  420401 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1007 12:44:56.473043  420401 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1007 12:44:56.473053  420401 command_runner.go:130] > # add_inheritable_capabilities = false
	I1007 12:44:56.473061  420401 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1007 12:44:56.473072  420401 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 12:44:56.473082  420401 command_runner.go:130] > default_sysctls = [
	I1007 12:44:56.473094  420401 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1007 12:44:56.473102  420401 command_runner.go:130] > ]
	I1007 12:44:56.473110  420401 command_runner.go:130] > # List of devices on the host that a
	I1007 12:44:56.473124  420401 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1007 12:44:56.473133  420401 command_runner.go:130] > # allowed_devices = [
	I1007 12:44:56.473141  420401 command_runner.go:130] > # 	"/dev/fuse",
	I1007 12:44:56.473144  420401 command_runner.go:130] > # ]
	I1007 12:44:56.473153  420401 command_runner.go:130] > # List of additional devices. specified as
	I1007 12:44:56.473169  420401 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1007 12:44:56.473180  420401 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1007 12:44:56.473192  420401 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1007 12:44:56.473201  420401 command_runner.go:130] > # additional_devices = [
	I1007 12:44:56.473207  420401 command_runner.go:130] > # ]
	I1007 12:44:56.473218  420401 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1007 12:44:56.473225  420401 command_runner.go:130] > # cdi_spec_dirs = [
	I1007 12:44:56.473229  420401 command_runner.go:130] > # 	"/etc/cdi",
	I1007 12:44:56.473237  420401 command_runner.go:130] > # 	"/var/run/cdi",
	I1007 12:44:56.473246  420401 command_runner.go:130] > # ]
	I1007 12:44:56.473258  420401 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1007 12:44:56.473271  420401 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1007 12:44:56.473280  420401 command_runner.go:130] > # Defaults to false.
	I1007 12:44:56.473291  420401 command_runner.go:130] > # device_ownership_from_security_context = false
	I1007 12:44:56.473304  420401 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1007 12:44:56.473312  420401 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1007 12:44:56.473320  420401 command_runner.go:130] > # hooks_dir = [
	I1007 12:44:56.473331  420401 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1007 12:44:56.473340  420401 command_runner.go:130] > # ]
	I1007 12:44:56.473352  420401 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1007 12:44:56.473365  420401 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1007 12:44:56.473375  420401 command_runner.go:130] > # its default mounts from the following two files:
	I1007 12:44:56.473384  420401 command_runner.go:130] > #
	I1007 12:44:56.473392  420401 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1007 12:44:56.473401  420401 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1007 12:44:56.473409  420401 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1007 12:44:56.473418  420401 command_runner.go:130] > #
	I1007 12:44:56.473428  420401 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1007 12:44:56.473440  420401 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1007 12:44:56.473453  420401 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1007 12:44:56.473460  420401 command_runner.go:130] > #      only add mounts it finds in this file.
	I1007 12:44:56.473468  420401 command_runner.go:130] > #
	I1007 12:44:56.473476  420401 command_runner.go:130] > # default_mounts_file = ""
	I1007 12:44:56.473484  420401 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1007 12:44:56.473514  420401 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1007 12:44:56.473524  420401 command_runner.go:130] > pids_limit = 1024
	I1007 12:44:56.473534  420401 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1007 12:44:56.473544  420401 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1007 12:44:56.473557  420401 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1007 12:44:56.473572  420401 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1007 12:44:56.473581  420401 command_runner.go:130] > # log_size_max = -1
	I1007 12:44:56.473596  420401 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1007 12:44:56.473605  420401 command_runner.go:130] > # log_to_journald = false
	I1007 12:44:56.473618  420401 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1007 12:44:56.473629  420401 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1007 12:44:56.473638  420401 command_runner.go:130] > # Path to directory for container attach sockets.
	I1007 12:44:56.473646  420401 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1007 12:44:56.473654  420401 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1007 12:44:56.473664  420401 command_runner.go:130] > # bind_mount_prefix = ""
	I1007 12:44:56.473676  420401 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1007 12:44:56.473686  420401 command_runner.go:130] > # read_only = false
	I1007 12:44:56.473698  420401 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1007 12:44:56.473709  420401 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1007 12:44:56.473718  420401 command_runner.go:130] > # live configuration reload.
	I1007 12:44:56.473727  420401 command_runner.go:130] > # log_level = "info"
	I1007 12:44:56.473735  420401 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1007 12:44:56.473743  420401 command_runner.go:130] > # This option supports live configuration reload.
	I1007 12:44:56.473754  420401 command_runner.go:130] > # log_filter = ""
	I1007 12:44:56.473766  420401 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1007 12:44:56.473778  420401 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1007 12:44:56.473787  420401 command_runner.go:130] > # separated by comma.
	I1007 12:44:56.473801  420401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 12:44:56.473810  420401 command_runner.go:130] > # uid_mappings = ""
	I1007 12:44:56.473816  420401 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1007 12:44:56.473826  420401 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1007 12:44:56.473837  420401 command_runner.go:130] > # separated by comma.
	I1007 12:44:56.473852  420401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 12:44:56.473861  420401 command_runner.go:130] > # gid_mappings = ""
	I1007 12:44:56.473875  420401 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1007 12:44:56.473888  420401 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 12:44:56.473900  420401 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 12:44:56.473913  420401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 12:44:56.473923  420401 command_runner.go:130] > # minimum_mappable_uid = -1
	I1007 12:44:56.473936  420401 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1007 12:44:56.473950  420401 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1007 12:44:56.473962  420401 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1007 12:44:56.473977  420401 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1007 12:44:56.473984  420401 command_runner.go:130] > # minimum_mappable_gid = -1
	I1007 12:44:56.473990  420401 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1007 12:44:56.474002  420401 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1007 12:44:56.474015  420401 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1007 12:44:56.474024  420401 command_runner.go:130] > # ctr_stop_timeout = 30
	I1007 12:44:56.474036  420401 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1007 12:44:56.474048  420401 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1007 12:44:56.474059  420401 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1007 12:44:56.474067  420401 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1007 12:44:56.474071  420401 command_runner.go:130] > drop_infra_ctr = false
	I1007 12:44:56.474083  420401 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1007 12:44:56.474095  420401 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1007 12:44:56.474109  420401 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1007 12:44:56.474118  420401 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1007 12:44:56.474132  420401 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1007 12:44:56.474144  420401 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1007 12:44:56.474153  420401 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1007 12:44:56.474163  420401 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1007 12:44:56.474173  420401 command_runner.go:130] > # shared_cpuset = ""
	I1007 12:44:56.474185  420401 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1007 12:44:56.474196  420401 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1007 12:44:56.474207  420401 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1007 12:44:56.474217  420401 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1007 12:44:56.474226  420401 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1007 12:44:56.474235  420401 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1007 12:44:56.474242  420401 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1007 12:44:56.474252  420401 command_runner.go:130] > # enable_criu_support = false
	I1007 12:44:56.474263  420401 command_runner.go:130] > # Enable/disable the generation of the container,
	I1007 12:44:56.474276  420401 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1007 12:44:56.474285  420401 command_runner.go:130] > # enable_pod_events = false
	I1007 12:44:56.474298  420401 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1007 12:44:56.474319  420401 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1007 12:44:56.474326  420401 command_runner.go:130] > # default_runtime = "runc"
	I1007 12:44:56.474335  420401 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1007 12:44:56.474351  420401 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1007 12:44:56.474369  420401 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1007 12:44:56.474380  420401 command_runner.go:130] > # creation as a file is not desired either.
	I1007 12:44:56.474395  420401 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1007 12:44:56.474404  420401 command_runner.go:130] > # the hostname is being managed dynamically.
	I1007 12:44:56.474409  420401 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1007 12:44:56.474416  420401 command_runner.go:130] > # ]
	I1007 12:44:56.474426  420401 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1007 12:44:56.474439  420401 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1007 12:44:56.474451  420401 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1007 12:44:56.474463  420401 command_runner.go:130] > # Each entry in the table should follow the format:
	I1007 12:44:56.474470  420401 command_runner.go:130] > #
	I1007 12:44:56.474477  420401 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1007 12:44:56.474487  420401 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1007 12:44:56.474521  420401 command_runner.go:130] > # runtime_type = "oci"
	I1007 12:44:56.474533  420401 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1007 12:44:56.474540  420401 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1007 12:44:56.474548  420401 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1007 12:44:56.474555  420401 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1007 12:44:56.474566  420401 command_runner.go:130] > # monitor_env = []
	I1007 12:44:56.474573  420401 command_runner.go:130] > # privileged_without_host_devices = false
	I1007 12:44:56.474580  420401 command_runner.go:130] > # allowed_annotations = []
	I1007 12:44:56.474586  420401 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1007 12:44:56.474593  420401 command_runner.go:130] > # Where:
	I1007 12:44:56.474601  420401 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1007 12:44:56.474612  420401 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1007 12:44:56.474623  420401 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1007 12:44:56.474634  420401 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1007 12:44:56.474643  420401 command_runner.go:130] > #   in $PATH.
	I1007 12:44:56.474652  420401 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1007 12:44:56.474661  420401 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1007 12:44:56.474671  420401 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1007 12:44:56.474679  420401 command_runner.go:130] > #   state.
	I1007 12:44:56.474687  420401 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1007 12:44:56.474698  420401 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1007 12:44:56.474708  420401 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1007 12:44:56.474716  420401 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1007 12:44:56.474729  420401 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1007 12:44:56.474742  420401 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1007 12:44:56.474752  420401 command_runner.go:130] > #   The currently recognized values are:
	I1007 12:44:56.474772  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1007 12:44:56.474792  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1007 12:44:56.474801  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1007 12:44:56.474811  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1007 12:44:56.474822  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1007 12:44:56.474835  420401 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1007 12:44:56.474849  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1007 12:44:56.474858  420401 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1007 12:44:56.474868  420401 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1007 12:44:56.474881  420401 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1007 12:44:56.474890  420401 command_runner.go:130] > #   deprecated option "conmon".
	I1007 12:44:56.474903  420401 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1007 12:44:56.474915  420401 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1007 12:44:56.474928  420401 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1007 12:44:56.474934  420401 command_runner.go:130] > #   should be moved to the container's cgroup
	I1007 12:44:56.474948  420401 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1007 12:44:56.474974  420401 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1007 12:44:56.474990  420401 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1007 12:44:56.475002  420401 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1007 12:44:56.475007  420401 command_runner.go:130] > #
	I1007 12:44:56.475018  420401 command_runner.go:130] > # Using the seccomp notifier feature:
	I1007 12:44:56.475026  420401 command_runner.go:130] > #
	I1007 12:44:56.475036  420401 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1007 12:44:56.475046  420401 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1007 12:44:56.475055  420401 command_runner.go:130] > #
	I1007 12:44:56.475065  420401 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1007 12:44:56.475078  420401 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1007 12:44:56.475085  420401 command_runner.go:130] > #
	I1007 12:44:56.475095  420401 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1007 12:44:56.475104  420401 command_runner.go:130] > # feature.
	I1007 12:44:56.475111  420401 command_runner.go:130] > #
	I1007 12:44:56.475119  420401 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1007 12:44:56.475127  420401 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1007 12:44:56.475136  420401 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1007 12:44:56.475150  420401 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1007 12:44:56.475162  420401 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1007 12:44:56.475168  420401 command_runner.go:130] > #
	I1007 12:44:56.475180  420401 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1007 12:44:56.475188  420401 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1007 12:44:56.475197  420401 command_runner.go:130] > #
	I1007 12:44:56.475206  420401 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1007 12:44:56.475218  420401 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1007 12:44:56.475227  420401 command_runner.go:130] > #
	I1007 12:44:56.475236  420401 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1007 12:44:56.475245  420401 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1007 12:44:56.475256  420401 command_runner.go:130] > # limitation.
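	As a minimal sketch (not part of the captured crio.conf above), a runtime handler that permits the notifier annotation could be declared along the following lines; the handler name "runc-notifier" is hypothetical, and the runtime path mirrors the runc entry that follows:
	
	    [crio.runtime.runtimes.runc-notifier]
	    runtime_path = "/usr/bin/runc"   # notifier support needs runc >= 1.1.0 or crun >= 0.19
	    runtime_type = "oci"
	    allowed_annotations = [
	    	"io.kubernetes.cri-o.seccompNotifierAction",
	    ]
	    # A pod opting in would carry the annotation
	    # io.kubernetes.cri-o.seccompNotifierAction: "stop"
	    # and have restartPolicy: Never, as noted above.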
	I1007 12:44:56.475264  420401 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1007 12:44:56.475274  420401 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1007 12:44:56.475281  420401 command_runner.go:130] > runtime_type = "oci"
	I1007 12:44:56.475291  420401 command_runner.go:130] > runtime_root = "/run/runc"
	I1007 12:44:56.475297  420401 command_runner.go:130] > runtime_config_path = ""
	I1007 12:44:56.475305  420401 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1007 12:44:56.475309  420401 command_runner.go:130] > monitor_cgroup = "pod"
	I1007 12:44:56.475316  420401 command_runner.go:130] > monitor_exec_cgroup = ""
	I1007 12:44:56.475319  420401 command_runner.go:130] > monitor_env = [
	I1007 12:44:56.475325  420401 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1007 12:44:56.475330  420401 command_runner.go:130] > ]
	I1007 12:44:56.475335  420401 command_runner.go:130] > privileged_without_host_devices = false
	I1007 12:44:56.475343  420401 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1007 12:44:56.475349  420401 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1007 12:44:56.475357  420401 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1007 12:44:56.475364  420401 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1007 12:44:56.475374  420401 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1007 12:44:56.475382  420401 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1007 12:44:56.475391  420401 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1007 12:44:56.475400  420401 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1007 12:44:56.475411  420401 command_runner.go:130] > # indicating that the default value for that resource type should be overridden.
	I1007 12:44:56.475417  420401 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1007 12:44:56.475423  420401 command_runner.go:130] > # Example:
	I1007 12:44:56.475427  420401 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1007 12:44:56.475434  420401 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1007 12:44:56.475439  420401 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1007 12:44:56.475446  420401 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1007 12:44:56.475450  420401 command_runner.go:130] > # cpuset = 0
	I1007 12:44:56.475456  420401 command_runner.go:130] > # cpushares = "0-1"
	I1007 12:44:56.475460  420401 command_runner.go:130] > # Where:
	I1007 12:44:56.475467  420401 command_runner.go:130] > # The workload name is workload-type.
	I1007 12:44:56.475473  420401 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1007 12:44:56.475482  420401 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1007 12:44:56.475491  420401 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1007 12:44:56.475499  420401 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1007 12:44:56.475510  420401 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1007 12:44:56.475514  420401 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1007 12:44:56.475523  420401 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1007 12:44:56.475530  420401 command_runner.go:130] > # Default value is set to true
	I1007 12:44:56.475534  420401 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1007 12:44:56.475542  420401 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1007 12:44:56.475547  420401 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1007 12:44:56.475553  420401 command_runner.go:130] > # Default value is set to 'false'
	I1007 12:44:56.475558  420401 command_runner.go:130] > # disable_hostport_mapping = false
	I1007 12:44:56.475566  420401 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1007 12:44:56.475571  420401 command_runner.go:130] > #
	I1007 12:44:56.475577  420401 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1007 12:44:56.475585  420401 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1007 12:44:56.475591  420401 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1007 12:44:56.475597  420401 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1007 12:44:56.475602  420401 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1007 12:44:56.475606  420401 command_runner.go:130] > [crio.image]
	I1007 12:44:56.475611  420401 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1007 12:44:56.475615  420401 command_runner.go:130] > # default_transport = "docker://"
	I1007 12:44:56.475620  420401 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1007 12:44:56.475627  420401 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1007 12:44:56.475631  420401 command_runner.go:130] > # global_auth_file = ""
	I1007 12:44:56.475635  420401 command_runner.go:130] > # The image used to instantiate infra containers.
	I1007 12:44:56.475640  420401 command_runner.go:130] > # This option supports live configuration reload.
	I1007 12:44:56.475644  420401 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1007 12:44:56.475650  420401 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1007 12:44:56.475655  420401 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1007 12:44:56.475660  420401 command_runner.go:130] > # This option supports live configuration reload.
	I1007 12:44:56.475663  420401 command_runner.go:130] > # pause_image_auth_file = ""
	I1007 12:44:56.475669  420401 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1007 12:44:56.475676  420401 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1007 12:44:56.475681  420401 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1007 12:44:56.475686  420401 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1007 12:44:56.475689  420401 command_runner.go:130] > # pause_command = "/pause"
	I1007 12:44:56.475694  420401 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1007 12:44:56.475700  420401 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1007 12:44:56.475705  420401 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1007 12:44:56.475710  420401 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1007 12:44:56.475716  420401 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1007 12:44:56.475722  420401 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1007 12:44:56.475726  420401 command_runner.go:130] > # pinned_images = [
	I1007 12:44:56.475729  420401 command_runner.go:130] > # ]
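	As a hedged illustration of the exact, glob, and keyword patterns described above (the image names are placeholders, not taken from this cluster's configuration), a populated pinned_images list could look like:
	
	    pinned_images = [
	    	"registry.k8s.io/pause:3.10",   # exact match: must match the entire name
	    	"registry.k8s.io/kube-*",       # glob match: wildcard * only at the end
	    	"*coredns*",                    # keyword match: wildcards on both ends
	    ]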
	I1007 12:44:56.475734  420401 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1007 12:44:56.475740  420401 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1007 12:44:56.475745  420401 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1007 12:44:56.475750  420401 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1007 12:44:56.475755  420401 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1007 12:44:56.475759  420401 command_runner.go:130] > # signature_policy = ""
	I1007 12:44:56.475764  420401 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1007 12:44:56.475773  420401 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1007 12:44:56.475778  420401 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1007 12:44:56.475787  420401 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1007 12:44:56.475792  420401 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1007 12:44:56.475800  420401 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1007 12:44:56.475809  420401 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1007 12:44:56.475814  420401 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1007 12:44:56.475820  420401 command_runner.go:130] > # changing them here.
	I1007 12:44:56.475825  420401 command_runner.go:130] > # insecure_registries = [
	I1007 12:44:56.475830  420401 command_runner.go:130] > # ]
	I1007 12:44:56.475836  420401 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1007 12:44:56.475841  420401 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1007 12:44:56.475845  420401 command_runner.go:130] > # image_volumes = "mkdir"
	I1007 12:44:56.475851  420401 command_runner.go:130] > # Temporary directory to use for storing big files
	I1007 12:44:56.475858  420401 command_runner.go:130] > # big_files_temporary_dir = ""
	I1007 12:44:56.475864  420401 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1007 12:44:56.475870  420401 command_runner.go:130] > # CNI plugins.
	I1007 12:44:56.475874  420401 command_runner.go:130] > [crio.network]
	I1007 12:44:56.475882  420401 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1007 12:44:56.475887  420401 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1007 12:44:56.475893  420401 command_runner.go:130] > # cni_default_network = ""
	I1007 12:44:56.475899  420401 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1007 12:44:56.475905  420401 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1007 12:44:56.475910  420401 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1007 12:44:56.475917  420401 command_runner.go:130] > # plugin_dirs = [
	I1007 12:44:56.475920  420401 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1007 12:44:56.475925  420401 command_runner.go:130] > # ]
	I1007 12:44:56.475930  420401 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1007 12:44:56.475936  420401 command_runner.go:130] > [crio.metrics]
	I1007 12:44:56.475941  420401 command_runner.go:130] > # Globally enable or disable metrics support.
	I1007 12:44:56.475947  420401 command_runner.go:130] > enable_metrics = true
	I1007 12:44:56.475952  420401 command_runner.go:130] > # Specify enabled metrics collectors.
	I1007 12:44:56.475958  420401 command_runner.go:130] > # Per default all metrics are enabled.
	I1007 12:44:56.475964  420401 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1007 12:44:56.475972  420401 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1007 12:44:56.475977  420401 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1007 12:44:56.475984  420401 command_runner.go:130] > # metrics_collectors = [
	I1007 12:44:56.475988  420401 command_runner.go:130] > # 	"operations",
	I1007 12:44:56.475995  420401 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1007 12:44:56.475999  420401 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1007 12:44:56.476005  420401 command_runner.go:130] > # 	"operations_errors",
	I1007 12:44:56.476009  420401 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1007 12:44:56.476015  420401 command_runner.go:130] > # 	"image_pulls_by_name",
	I1007 12:44:56.476020  420401 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1007 12:44:56.476026  420401 command_runner.go:130] > # 	"image_pulls_failures",
	I1007 12:44:56.476030  420401 command_runner.go:130] > # 	"image_pulls_successes",
	I1007 12:44:56.476037  420401 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1007 12:44:56.476042  420401 command_runner.go:130] > # 	"image_layer_reuse",
	I1007 12:44:56.476049  420401 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1007 12:44:56.476053  420401 command_runner.go:130] > # 	"containers_oom_total",
	I1007 12:44:56.476059  420401 command_runner.go:130] > # 	"containers_oom",
	I1007 12:44:56.476062  420401 command_runner.go:130] > # 	"processes_defunct",
	I1007 12:44:56.476068  420401 command_runner.go:130] > # 	"operations_total",
	I1007 12:44:56.476073  420401 command_runner.go:130] > # 	"operations_latency_seconds",
	I1007 12:44:56.476079  420401 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1007 12:44:56.476083  420401 command_runner.go:130] > # 	"operations_errors_total",
	I1007 12:44:56.476088  420401 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1007 12:44:56.476094  420401 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1007 12:44:56.476098  420401 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1007 12:44:56.476104  420401 command_runner.go:130] > # 	"image_pulls_success_total",
	I1007 12:44:56.476109  420401 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1007 12:44:56.476114  420401 command_runner.go:130] > # 	"containers_oom_count_total",
	I1007 12:44:56.476119  420401 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1007 12:44:56.476126  420401 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1007 12:44:56.476129  420401 command_runner.go:130] > # ]
	I1007 12:44:56.476137  420401 command_runner.go:130] > # The port on which the metrics server will listen.
	I1007 12:44:56.476141  420401 command_runner.go:130] > # metrics_port = 9090
	I1007 12:44:56.476147  420401 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1007 12:44:56.476152  420401 command_runner.go:130] > # metrics_socket = ""
	I1007 12:44:56.476157  420401 command_runner.go:130] > # The certificate for the secure metrics server.
	I1007 12:44:56.476163  420401 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1007 12:44:56.476171  420401 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1007 12:44:56.476178  420401 command_runner.go:130] > # certificate on any modification event.
	I1007 12:44:56.476188  420401 command_runner.go:130] > # metrics_cert = ""
	I1007 12:44:56.476196  420401 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1007 12:44:56.476207  420401 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1007 12:44:56.476212  420401 command_runner.go:130] > # metrics_key = ""
	I1007 12:44:56.476221  420401 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1007 12:44:56.476230  420401 command_runner.go:130] > [crio.tracing]
	I1007 12:44:56.476239  420401 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1007 12:44:56.476249  420401 command_runner.go:130] > # enable_tracing = false
	I1007 12:44:56.476258  420401 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1007 12:44:56.476268  420401 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1007 12:44:56.476275  420401 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1007 12:44:56.476279  420401 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1007 12:44:56.476284  420401 command_runner.go:130] > # CRI-O NRI configuration.
	I1007 12:44:56.476287  420401 command_runner.go:130] > [crio.nri]
	I1007 12:44:56.476292  420401 command_runner.go:130] > # Globally enable or disable NRI.
	I1007 12:44:56.476298  420401 command_runner.go:130] > # enable_nri = false
	I1007 12:44:56.476303  420401 command_runner.go:130] > # NRI socket to listen on.
	I1007 12:44:56.476308  420401 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1007 12:44:56.476312  420401 command_runner.go:130] > # NRI plugin directory to use.
	I1007 12:44:56.476319  420401 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1007 12:44:56.476324  420401 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1007 12:44:56.476330  420401 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1007 12:44:56.476336  420401 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1007 12:44:56.476342  420401 command_runner.go:130] > # nri_disable_connections = false
	I1007 12:44:56.476347  420401 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1007 12:44:56.476353  420401 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1007 12:44:56.476359  420401 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1007 12:44:56.476365  420401 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1007 12:44:56.476373  420401 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1007 12:44:56.476379  420401 command_runner.go:130] > [crio.stats]
	I1007 12:44:56.476384  420401 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1007 12:44:56.476392  420401 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1007 12:44:56.476396  420401 command_runner.go:130] > # stats_collection_period = 0
	I1007 12:44:56.476435  420401 command_runner.go:130] ! time="2024-10-07 12:44:56.428132180Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1007 12:44:56.476448  420401 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1007 12:44:56.476567  420401 cni.go:84] Creating CNI manager for ""
	I1007 12:44:56.476583  420401 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1007 12:44:56.476591  420401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:44:56.476611  420401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.213 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-263097 NodeName:multinode-263097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:44:56.476739  420401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-263097"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:44:56.476798  420401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:44:56.488161  420401 command_runner.go:130] > kubeadm
	I1007 12:44:56.488189  420401 command_runner.go:130] > kubectl
	I1007 12:44:56.488195  420401 command_runner.go:130] > kubelet
	I1007 12:44:56.488225  420401 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:44:56.488292  420401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:44:56.498541  420401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1007 12:44:56.516255  420401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:44:56.534011  420401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1007 12:44:56.553366  420401 ssh_runner.go:195] Run: grep 192.168.39.213	control-plane.minikube.internal$ /etc/hosts
	I1007 12:44:56.557955  420401 command_runner.go:130] > 192.168.39.213	control-plane.minikube.internal
	I1007 12:44:56.558045  420401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:44:56.707064  420401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:44:56.722857  420401 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097 for IP: 192.168.39.213
	I1007 12:44:56.722887  420401 certs.go:194] generating shared ca certs ...
	I1007 12:44:56.722926  420401 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:44:56.723152  420401 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:44:56.723233  420401 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:44:56.723261  420401 certs.go:256] generating profile certs ...
	I1007 12:44:56.723371  420401 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/client.key
	I1007 12:44:56.723447  420401 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/apiserver.key.d51ecaf1
	I1007 12:44:56.723495  420401 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/proxy-client.key
	I1007 12:44:56.723525  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1007 12:44:56.723546  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1007 12:44:56.723569  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1007 12:44:56.723589  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1007 12:44:56.723611  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1007 12:44:56.723632  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1007 12:44:56.723649  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1007 12:44:56.723669  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1007 12:44:56.723736  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:44:56.723779  420401 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:44:56.723793  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:44:56.723831  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:44:56.723874  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:44:56.723905  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:44:56.723961  420401 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:44:56.724000  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem -> /usr/share/ca-certificates/384271.pem
	I1007 12:44:56.724016  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> /usr/share/ca-certificates/3842712.pem
	I1007 12:44:56.724035  420401 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:56.724970  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:44:56.751944  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:44:56.778527  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:44:56.804500  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:44:56.830866  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 12:44:56.857602  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:44:56.884583  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:44:56.911301  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/multinode-263097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:44:56.938183  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:44:56.965581  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:44:56.991544  420401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:44:57.017718  420401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:44:57.036054  420401 ssh_runner.go:195] Run: openssl version
	I1007 12:44:57.042013  420401 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1007 12:44:57.042224  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:44:57.053994  420401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:57.058794  420401 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:57.058827  420401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:57.058876  420401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:44:57.064855  420401 command_runner.go:130] > b5213941
	I1007 12:44:57.064933  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:44:57.075873  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:44:57.089759  420401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:44:57.094849  420401 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:44:57.095016  420401 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:44:57.095079  420401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:44:57.101287  420401 command_runner.go:130] > 51391683
	I1007 12:44:57.101369  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:44:57.112672  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:44:57.125741  420401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:44:57.131384  420401 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:44:57.131551  420401 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:44:57.131620  420401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:44:57.137589  420401 command_runner.go:130] > 3ec20f2e
	I1007 12:44:57.137864  420401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:44:57.149703  420401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:44:57.154843  420401 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:44:57.154872  420401 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1007 12:44:57.154877  420401 command_runner.go:130] > Device: 253,1	Inode: 8384040     Links: 1
	I1007 12:44:57.154884  420401 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1007 12:44:57.154893  420401 command_runner.go:130] > Access: 2024-10-07 12:38:06.689756968 +0000
	I1007 12:44:57.154898  420401 command_runner.go:130] > Modify: 2024-10-07 12:38:06.689756968 +0000
	I1007 12:44:57.154906  420401 command_runner.go:130] > Change: 2024-10-07 12:38:06.689756968 +0000
	I1007 12:44:57.154914  420401 command_runner.go:130] >  Birth: 2024-10-07 12:38:06.689756968 +0000
	I1007 12:44:57.155023  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:44:57.162053  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.162134  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:44:57.169066  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.169157  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:44:57.176034  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.176116  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:44:57.182355  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.182626  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:44:57.189741  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.189949  420401 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1007 12:44:57.196149  420401 command_runner.go:130] > Certificate will not expire
	I1007 12:44:57.196241  420401 kubeadm.go:392] StartCluster: {Name:multinode-263097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:multinode-263097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget
:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:44:57.196360  420401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:44:57.196428  420401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:44:57.241095  420401 command_runner.go:130] > 7ba9799f96099d623cb6f05b7f50ab6b884e9a5e917bdee263a6c4eb89260a2b
	I1007 12:44:57.241174  420401 command_runner.go:130] > a4a3b707c2ce9aa809e6b495f0dd6d9d6eb5f9ebb8f247654bc4ded294548ea6
	I1007 12:44:57.241194  420401 command_runner.go:130] > c122775af688fa8e537ad4b037afd7babfee3aa4af5622a7bded4ab7948597ba
	I1007 12:44:57.241348  420401 command_runner.go:130] > abe2e400fd4cc83b7e72e85b3caecc098caf9acd8639db45e2213cd92bde3da1
	I1007 12:44:57.241424  420401 command_runner.go:130] > 30046ab9542693f0a90494b418261b1b770616073cf44f8b610e23f71d4f5e95
	I1007 12:44:57.241476  420401 command_runner.go:130] > a0be5b855d1afd4130d3975abdfe105dde112c5f46bc38dfe30f4d65c54d92ee
	I1007 12:44:57.241529  420401 command_runner.go:130] > faa59e20f08afbd1ea30f61205e39df0670bf192eb384ba421be6325871d088e
	I1007 12:44:57.241620  420401 command_runner.go:130] > a6c0ada6ae97b3bb470ee835db8da6ace7c6948d9afe2d60fd0fd9f2ede257b3
	I1007 12:44:57.243121  420401 cri.go:89] found id: "7ba9799f96099d623cb6f05b7f50ab6b884e9a5e917bdee263a6c4eb89260a2b"
	I1007 12:44:57.243135  420401 cri.go:89] found id: "a4a3b707c2ce9aa809e6b495f0dd6d9d6eb5f9ebb8f247654bc4ded294548ea6"
	I1007 12:44:57.243140  420401 cri.go:89] found id: "c122775af688fa8e537ad4b037afd7babfee3aa4af5622a7bded4ab7948597ba"
	I1007 12:44:57.243144  420401 cri.go:89] found id: "abe2e400fd4cc83b7e72e85b3caecc098caf9acd8639db45e2213cd92bde3da1"
	I1007 12:44:57.243148  420401 cri.go:89] found id: "30046ab9542693f0a90494b418261b1b770616073cf44f8b610e23f71d4f5e95"
	I1007 12:44:57.243153  420401 cri.go:89] found id: "a0be5b855d1afd4130d3975abdfe105dde112c5f46bc38dfe30f4d65c54d92ee"
	I1007 12:44:57.243157  420401 cri.go:89] found id: "faa59e20f08afbd1ea30f61205e39df0670bf192eb384ba421be6325871d088e"
	I1007 12:44:57.243161  420401 cri.go:89] found id: "a6c0ada6ae97b3bb470ee835db8da6ace7c6948d9afe2d60fd0fd9f2ede257b3"
	I1007 12:44:57.243166  420401 cri.go:89] found id: ""
	I1007 12:44:57.243230  420401 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-263097 -n multinode-263097
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-263097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.38s)

                                                
                                    
x
+
TestPreload (157.81s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-474339 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-474339 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m28.405184942s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-474339 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-474339 image pull gcr.io/k8s-minikube/busybox: (1.337976657s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-474339
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-474339: (7.314750792s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-474339 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1007 12:55:01.380197  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-474339 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (57.320048317s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-474339 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-10-07 12:55:55.515618314 +0000 UTC m=+5074.134202320
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-474339 -n test-preload-474339
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-474339 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-474339 logs -n 25: (1.227039318s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n multinode-263097 sudo cat                                       | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /home/docker/cp-test_multinode-263097-m03_multinode-263097.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-263097 cp multinode-263097-m03:/home/docker/cp-test.txt                       | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m02:/home/docker/cp-test_multinode-263097-m03_multinode-263097-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n                                                                 | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | multinode-263097-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-263097 ssh -n multinode-263097-m02 sudo cat                                   | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	|         | /home/docker/cp-test_multinode-263097-m03_multinode-263097-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-263097 node stop m03                                                          | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:40 UTC |
	| node    | multinode-263097 node start                                                             | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:40 UTC | 07 Oct 24 12:41 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-263097                                                                | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:41 UTC |                     |
	| stop    | -p multinode-263097                                                                     | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:41 UTC |                     |
	| start   | -p multinode-263097                                                                     | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:43 UTC | 07 Oct 24 12:46 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-263097                                                                | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:46 UTC |                     |
	| node    | multinode-263097 node delete                                                            | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:46 UTC | 07 Oct 24 12:46 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-263097 stop                                                                   | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:46 UTC |                     |
	| start   | -p multinode-263097                                                                     | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:49 UTC | 07 Oct 24 12:52 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-263097                                                                | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:52 UTC |                     |
	| start   | -p multinode-263097-m02                                                                 | multinode-263097-m02 | jenkins | v1.34.0 | 07 Oct 24 12:52 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-263097-m03                                                                 | multinode-263097-m03 | jenkins | v1.34.0 | 07 Oct 24 12:52 UTC | 07 Oct 24 12:53 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-263097                                                                 | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:53 UTC |                     |
	| delete  | -p multinode-263097-m03                                                                 | multinode-263097-m03 | jenkins | v1.34.0 | 07 Oct 24 12:53 UTC | 07 Oct 24 12:53 UTC |
	| delete  | -p multinode-263097                                                                     | multinode-263097     | jenkins | v1.34.0 | 07 Oct 24 12:53 UTC | 07 Oct 24 12:53 UTC |
	| start   | -p test-preload-474339                                                                  | test-preload-474339  | jenkins | v1.34.0 | 07 Oct 24 12:53 UTC | 07 Oct 24 12:54 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-474339 image pull                                                          | test-preload-474339  | jenkins | v1.34.0 | 07 Oct 24 12:54 UTC | 07 Oct 24 12:54 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-474339                                                                  | test-preload-474339  | jenkins | v1.34.0 | 07 Oct 24 12:54 UTC | 07 Oct 24 12:54 UTC |
	| start   | -p test-preload-474339                                                                  | test-preload-474339  | jenkins | v1.34.0 | 07 Oct 24 12:54 UTC | 07 Oct 24 12:55 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-474339 image list                                                          | test-preload-474339  | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
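	Each wrapped row above is a single invocation. For readability, the second start (12:54 UTC), which produced the "Last Start" log below, corresponds roughly to the following command (flags taken from the table, binary name from MINIKUBE_BIN in the log; exact quoting is assumed):
	    out/minikube-linux-amd64 start -p test-preload-474339 --memory=2200 \
	      --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=crio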
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:54:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:54:57.971952  424792 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:54:57.972220  424792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:54:57.972228  424792 out.go:358] Setting ErrFile to fd 2...
	I1007 12:54:57.972233  424792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:54:57.972420  424792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:54:57.972997  424792 out.go:352] Setting JSON to false
	I1007 12:54:57.973999  424792 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9444,"bootTime":1728296254,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 12:54:57.974060  424792 start.go:139] virtualization: kvm guest
	I1007 12:54:57.976612  424792 out.go:177] * [test-preload-474339] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 12:54:57.978110  424792 notify.go:220] Checking for updates...
	I1007 12:54:57.979654  424792 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 12:54:57.981421  424792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:54:57.983302  424792 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:54:57.985179  424792 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 12:54:57.986707  424792 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 12:54:57.988181  424792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:54:57.989958  424792 config.go:182] Loaded profile config "test-preload-474339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1007 12:54:57.990332  424792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:54:57.990384  424792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:54:58.005695  424792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41411
	I1007 12:54:58.006251  424792 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:54:58.006827  424792 main.go:141] libmachine: Using API Version  1
	I1007 12:54:58.006855  424792 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:54:58.007334  424792 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:54:58.007552  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:54:58.009762  424792 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 12:54:58.011348  424792 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:54:58.011678  424792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:54:58.011725  424792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:54:58.027452  424792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34267
	I1007 12:54:58.027908  424792 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:54:58.028450  424792 main.go:141] libmachine: Using API Version  1
	I1007 12:54:58.028477  424792 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:54:58.028800  424792 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:54:58.029013  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:54:58.067148  424792 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 12:54:58.068643  424792 start.go:297] selected driver: kvm2
	I1007 12:54:58.068667  424792 start.go:901] validating driver "kvm2" against &{Name:test-preload-474339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.24.4 ClusterName:test-preload-474339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:54:58.068788  424792 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:54:58.069551  424792 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:54:58.069653  424792 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 12:54:58.086409  424792 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 12:54:58.086832  424792 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:54:58.086869  424792 cni.go:84] Creating CNI manager for ""
	I1007 12:54:58.086916  424792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:54:58.087057  424792 start.go:340] cluster config:
	{Name:test-preload-474339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-474339 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:54:58.087214  424792 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:54:58.089612  424792 out.go:177] * Starting "test-preload-474339" primary control-plane node in "test-preload-474339" cluster
	I1007 12:54:58.091264  424792 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1007 12:54:58.117479  424792 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1007 12:54:58.117520  424792 cache.go:56] Caching tarball of preloaded images
	I1007 12:54:58.117696  424792 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1007 12:54:58.119684  424792 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1007 12:54:58.121308  424792 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1007 12:54:58.150012  424792 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1007 12:55:00.593047  424792 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1007 12:55:00.593147  424792 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1007 12:55:01.462214  424792 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
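	The preload fetch above passes an md5 checksum as a query parameter and verifies the tarball after download. A rough manual equivalent, using only the URL and checksum that appear in the log (minikube handles this internally; the curl/md5sum pairing here is just an illustration, with the cache path shortened):
	    curl -fLo preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 \
	      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
	    echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -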
	I1007 12:55:01.462348  424792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/config.json ...
	I1007 12:55:01.462589  424792 start.go:360] acquireMachinesLock for test-preload-474339: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 12:55:01.462658  424792 start.go:364] duration metric: took 46.726µs to acquireMachinesLock for "test-preload-474339"
	I1007 12:55:01.462674  424792 start.go:96] Skipping create...Using existing machine configuration
	I1007 12:55:01.462679  424792 fix.go:54] fixHost starting: 
	I1007 12:55:01.462980  424792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:55:01.463018  424792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:55:01.478386  424792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45311
	I1007 12:55:01.478938  424792 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:55:01.479500  424792 main.go:141] libmachine: Using API Version  1
	I1007 12:55:01.479517  424792 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:55:01.479903  424792 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:55:01.480101  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:55:01.480274  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetState
	I1007 12:55:01.481967  424792 fix.go:112] recreateIfNeeded on test-preload-474339: state=Stopped err=<nil>
	I1007 12:55:01.481998  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	W1007 12:55:01.482152  424792 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 12:55:01.484294  424792 out.go:177] * Restarting existing kvm2 VM for "test-preload-474339" ...
	I1007 12:55:01.485506  424792 main.go:141] libmachine: (test-preload-474339) Calling .Start
	I1007 12:55:01.485689  424792 main.go:141] libmachine: (test-preload-474339) Ensuring networks are active...
	I1007 12:55:01.486438  424792 main.go:141] libmachine: (test-preload-474339) Ensuring network default is active
	I1007 12:55:01.486747  424792 main.go:141] libmachine: (test-preload-474339) Ensuring network mk-test-preload-474339 is active
	I1007 12:55:01.487085  424792 main.go:141] libmachine: (test-preload-474339) Getting domain xml...
	I1007 12:55:01.487984  424792 main.go:141] libmachine: (test-preload-474339) Creating domain...
	I1007 12:55:02.725550  424792 main.go:141] libmachine: (test-preload-474339) Waiting to get IP...
	I1007 12:55:02.726404  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:02.726832  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:02.726870  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:02.726801  424843 retry.go:31] will retry after 242.090439ms: waiting for machine to come up
	I1007 12:55:02.970609  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:02.971035  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:02.971066  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:02.970993  424843 retry.go:31] will retry after 251.785786ms: waiting for machine to come up
	I1007 12:55:03.224607  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:03.225032  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:03.225100  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:03.224994  424843 retry.go:31] will retry after 353.411233ms: waiting for machine to come up
	I1007 12:55:03.579588  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:03.580044  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:03.580067  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:03.579987  424843 retry.go:31] will retry after 601.588373ms: waiting for machine to come up
	I1007 12:55:04.183016  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:04.183546  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:04.183565  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:04.183488  424843 retry.go:31] will retry after 490.639512ms: waiting for machine to come up
	I1007 12:55:04.676416  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:04.676952  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:04.676979  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:04.676887  424843 retry.go:31] will retry after 634.995432ms: waiting for machine to come up
	I1007 12:55:05.314206  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:05.314664  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:05.314685  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:05.314631  424843 retry.go:31] will retry after 1.166318856s: waiting for machine to come up
	I1007 12:55:06.482427  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:06.482878  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:06.482905  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:06.482839  424843 retry.go:31] will retry after 1.427047814s: waiting for machine to come up
	I1007 12:55:07.912634  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:07.913061  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:07.913091  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:07.912978  424843 retry.go:31] will retry after 1.133053183s: waiting for machine to come up
	I1007 12:55:09.047536  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:09.047933  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:09.047958  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:09.047888  424843 retry.go:31] will retry after 1.64365374s: waiting for machine to come up
	I1007 12:55:10.693766  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:10.694247  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:10.694269  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:10.694202  424843 retry.go:31] will retry after 2.846809429s: waiting for machine to come up
	I1007 12:55:13.542634  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:13.543057  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:13.543079  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:13.543009  424843 retry.go:31] will retry after 2.402555115s: waiting for machine to come up
	I1007 12:55:15.948360  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:15.948757  424792 main.go:141] libmachine: (test-preload-474339) DBG | unable to find current IP address of domain test-preload-474339 in network mk-test-preload-474339
	I1007 12:55:15.948787  424792 main.go:141] libmachine: (test-preload-474339) DBG | I1007 12:55:15.948707  424843 retry.go:31] will retry after 3.665259693s: waiting for machine to come up
	I1007 12:55:19.618259  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:19.618760  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has current primary IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:19.618794  424792 main.go:141] libmachine: (test-preload-474339) Found IP for machine: 192.168.39.79
	I1007 12:55:19.618808  424792 main.go:141] libmachine: (test-preload-474339) Reserving static IP address...
	I1007 12:55:19.619343  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "test-preload-474339", mac: "52:54:00:56:a6:3b", ip: "192.168.39.79"} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:19.619383  424792 main.go:141] libmachine: (test-preload-474339) DBG | skip adding static IP to network mk-test-preload-474339 - found existing host DHCP lease matching {name: "test-preload-474339", mac: "52:54:00:56:a6:3b", ip: "192.168.39.79"}
	I1007 12:55:19.619399  424792 main.go:141] libmachine: (test-preload-474339) Reserved static IP address: 192.168.39.79
	I1007 12:55:19.619411  424792 main.go:141] libmachine: (test-preload-474339) Waiting for SSH to be available...
	I1007 12:55:19.619422  424792 main.go:141] libmachine: (test-preload-474339) DBG | Getting to WaitForSSH function...
	I1007 12:55:19.621930  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:19.622209  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:19.622257  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:19.622419  424792 main.go:141] libmachine: (test-preload-474339) DBG | Using SSH client type: external
	I1007 12:55:19.622440  424792 main.go:141] libmachine: (test-preload-474339) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/test-preload-474339/id_rsa (-rw-------)
	I1007 12:55:19.622478  424792 main.go:141] libmachine: (test-preload-474339) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.79 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/test-preload-474339/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 12:55:19.622491  424792 main.go:141] libmachine: (test-preload-474339) DBG | About to run SSH command:
	I1007 12:55:19.622503  424792 main.go:141] libmachine: (test-preload-474339) DBG | exit 0
	I1007 12:55:19.751161  424792 main.go:141] libmachine: (test-preload-474339) DBG | SSH cmd err, output: <nil>: 
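	The WaitForSSH probe above shells out to /usr/bin/ssh with the option list logged at 12:55:19.622478 and runs "exit 0"; written out as one command (arguments copied from the log, only reordered for readability) it is approximately:
	    /usr/bin/ssh -F /dev/null \
	      -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none \
	      -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/test-preload-474339/id_rsa \
	      -p 22 docker@192.168.39.79 "exit 0"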
	I1007 12:55:19.751561  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetConfigRaw
	I1007 12:55:19.752208  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetIP
	I1007 12:55:19.754633  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:19.755035  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:19.755068  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:19.755344  424792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/config.json ...
	I1007 12:55:19.755595  424792 machine.go:93] provisionDockerMachine start ...
	I1007 12:55:19.755621  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:55:19.755828  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:19.758022  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:19.758352  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:19.758373  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:19.758528  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:19.758694  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:19.758819  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:19.759076  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:19.759282  424792 main.go:141] libmachine: Using SSH client type: native
	I1007 12:55:19.759482  424792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I1007 12:55:19.759493  424792 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:55:19.875516  424792 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 12:55:19.875556  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetMachineName
	I1007 12:55:19.875818  424792 buildroot.go:166] provisioning hostname "test-preload-474339"
	I1007 12:55:19.875857  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetMachineName
	I1007 12:55:19.876079  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:19.878485  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:19.878792  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:19.878824  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:19.878973  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:19.879170  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:19.879333  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:19.879478  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:19.879638  424792 main.go:141] libmachine: Using SSH client type: native
	I1007 12:55:19.879814  424792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I1007 12:55:19.879826  424792 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-474339 && echo "test-preload-474339" | sudo tee /etc/hostname
	I1007 12:55:20.011389  424792 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-474339
	
	I1007 12:55:20.011423  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:20.014130  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.014559  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:20.014615  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.014735  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:20.015002  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:20.015226  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:20.015405  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:20.015652  424792 main.go:141] libmachine: Using SSH client type: native
	I1007 12:55:20.015875  424792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I1007 12:55:20.015898  424792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-474339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-474339/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-474339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:55:20.141805  424792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:55:20.141840  424792 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 12:55:20.141867  424792 buildroot.go:174] setting up certificates
	I1007 12:55:20.141879  424792 provision.go:84] configureAuth start
	I1007 12:55:20.141891  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetMachineName
	I1007 12:55:20.142243  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetIP
	I1007 12:55:20.144794  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.145211  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:20.145242  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.145401  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:20.147628  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.148018  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:20.148053  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.148195  424792 provision.go:143] copyHostCerts
	I1007 12:55:20.148257  424792 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 12:55:20.148271  424792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 12:55:20.148359  424792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 12:55:20.148496  424792 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 12:55:20.148507  424792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 12:55:20.148542  424792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 12:55:20.148625  424792 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 12:55:20.148635  424792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 12:55:20.148667  424792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 12:55:20.148733  424792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.test-preload-474339 san=[127.0.0.1 192.168.39.79 localhost minikube test-preload-474339]
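	The server certificate above is generated in Go and signed by the minikube CA for the listed SANs. Purely as an illustration of the same shape of certificate (this is not minikube's code path, and mapping the logged org field to /O= is an assumption), an openssl sketch would look like:
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.test-preload-474339"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.79,DNS:localhost,DNS:minikube,DNS:test-preload-474339")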
	I1007 12:55:20.206896  424792 provision.go:177] copyRemoteCerts
	I1007 12:55:20.206990  424792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:55:20.207027  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:20.209852  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.210182  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:20.210217  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.210410  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:20.210616  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:20.210763  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:20.210890  424792 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/test-preload-474339/id_rsa Username:docker}
	I1007 12:55:20.302225  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:55:20.330615  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1007 12:55:20.356021  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 12:55:20.382255  424792 provision.go:87] duration metric: took 240.358908ms to configureAuth
	I1007 12:55:20.382285  424792 buildroot.go:189] setting minikube options for container-runtime
	I1007 12:55:20.382479  424792 config.go:182] Loaded profile config "test-preload-474339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1007 12:55:20.382572  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:20.385199  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.385578  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:20.385618  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.385795  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:20.386021  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:20.386244  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:20.386416  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:20.386635  424792 main.go:141] libmachine: Using SSH client type: native
	I1007 12:55:20.386817  424792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I1007 12:55:20.386834  424792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 12:55:20.628592  424792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 12:55:20.628623  424792 machine.go:96] duration metric: took 873.010471ms to provisionDockerMachine
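	The printf | tee command a few lines above leaves the following file on the guest; the crio unit in the minikube guest image is expected to source it as extra daemon flags (that unit wiring is not visible in this log, so treat it as an assumption):
	    # /etc/sysconfig/crio.minikube
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '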
	I1007 12:55:20.628640  424792 start.go:293] postStartSetup for "test-preload-474339" (driver="kvm2")
	I1007 12:55:20.628655  424792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:55:20.628678  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:55:20.629042  424792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:55:20.629081  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:20.631544  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.631880  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:20.631914  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.632038  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:20.632246  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:20.632389  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:20.632526  424792 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/test-preload-474339/id_rsa Username:docker}
	I1007 12:55:20.722708  424792 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:55:20.727832  424792 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 12:55:20.727863  424792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 12:55:20.727939  424792 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 12:55:20.728013  424792 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 12:55:20.728101  424792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 12:55:20.738880  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:55:20.765698  424792 start.go:296] duration metric: took 137.037499ms for postStartSetup
	I1007 12:55:20.765772  424792 fix.go:56] duration metric: took 19.303092632s for fixHost
	I1007 12:55:20.765799  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:20.768956  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.769321  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:20.769353  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.769514  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:20.769749  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:20.769907  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:20.770056  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:20.770235  424792 main.go:141] libmachine: Using SSH client type: native
	I1007 12:55:20.770429  424792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.39.79 22 <nil> <nil>}
	I1007 12:55:20.770441  424792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 12:55:20.888347  424792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728305720.844958814
	
	I1007 12:55:20.888376  424792 fix.go:216] guest clock: 1728305720.844958814
	I1007 12:55:20.888386  424792 fix.go:229] Guest: 2024-10-07 12:55:20.844958814 +0000 UTC Remote: 2024-10-07 12:55:20.76577774 +0000 UTC m=+22.832761259 (delta=79.181074ms)
	I1007 12:55:20.888411  424792 fix.go:200] guest clock delta is within tolerance: 79.181074ms
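	(The delta reported above is simply guest minus remote wall-clock: 1728305720.844958814 − 1728305720.765777740 ≈ 0.079181074 s, i.e. the 79.181074ms shown.)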
	I1007 12:55:20.888422  424792 start.go:83] releasing machines lock for "test-preload-474339", held for 19.425748792s
	I1007 12:55:20.888444  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:55:20.888710  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetIP
	I1007 12:55:20.891552  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.891882  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:20.891917  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.892048  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:55:20.892626  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:55:20.892798  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:55:20.892919  424792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:55:20.892965  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:20.893044  424792 ssh_runner.go:195] Run: cat /version.json
	I1007 12:55:20.893075  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:20.895843  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.895873  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.896311  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:20.896338  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.896389  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:20.896415  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:20.896486  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:20.896692  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:20.896695  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:20.896883  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:20.896898  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:20.897045  424792 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/test-preload-474339/id_rsa Username:docker}
	I1007 12:55:20.897146  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:20.897312  424792 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/test-preload-474339/id_rsa Username:docker}
	I1007 12:55:20.980642  424792 ssh_runner.go:195] Run: systemctl --version
	I1007 12:55:21.005363  424792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 12:55:21.154839  424792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 12:55:21.161575  424792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 12:55:21.161657  424792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:55:21.179807  424792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 12:55:21.179844  424792 start.go:495] detecting cgroup driver to use...
	I1007 12:55:21.179912  424792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 12:55:21.197934  424792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 12:55:21.213677  424792 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:55:21.213755  424792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:55:21.229051  424792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:55:21.243924  424792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:55:21.362202  424792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:55:21.497256  424792 docker.go:233] disabling docker service ...
	I1007 12:55:21.497342  424792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:55:21.512602  424792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:55:21.526049  424792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:55:21.670416  424792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:55:21.813462  424792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:55:21.828788  424792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:55:21.850442  424792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1007 12:55:21.850519  424792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:55:21.862936  424792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 12:55:21.863026  424792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:55:21.875610  424792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:55:21.887935  424792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:55:21.900496  424792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:55:21.913139  424792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:55:21.925236  424792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:55:21.944196  424792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 12:55:21.956706  424792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:55:21.967824  424792 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 12:55:21.967900  424792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 12:55:21.982518  424792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:55:21.993647  424792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:55:22.113948  424792 ssh_runner.go:195] Run: sudo systemctl restart crio
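	Collapsed into one script, the cri-o configuration pass above (everything between writing /etc/crictl.yaml and the restart) amounts to roughly the following guest-side commands; this is just the sequence of ssh_runner calls from the log condensed, with the default_sysctls edits summarized in a comment:
	    echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo rm -rf /etc/cni/net.mk
	    # plus the default_sysctls edits that inject "net.ipv4.ip_unprivileged_port_start=0"
	    sudo modprobe br_netfilter        # run because bridge-nf-call-iptables was not present
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio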
	I1007 12:55:22.210135  424792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 12:55:22.210212  424792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 12:55:22.215202  424792 start.go:563] Will wait 60s for crictl version
	I1007 12:55:22.215284  424792 ssh_runner.go:195] Run: which crictl
	I1007 12:55:22.219581  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:55:22.265373  424792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 12:55:22.265468  424792 ssh_runner.go:195] Run: crio --version
	I1007 12:55:22.296039  424792 ssh_runner.go:195] Run: crio --version
	I1007 12:55:22.328252  424792 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1007 12:55:22.329757  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetIP
	I1007 12:55:22.332571  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:22.332934  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:22.332970  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:22.333225  424792 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1007 12:55:22.337704  424792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:55:22.351716  424792 kubeadm.go:883] updating cluster {Name:test-preload-474339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-474339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:55:22.351985  424792 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1007 12:55:22.352091  424792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:55:22.391765  424792 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1007 12:55:22.391857  424792 ssh_runner.go:195] Run: which lz4
	I1007 12:55:22.396381  424792 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 12:55:22.400852  424792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 12:55:22.400893  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1007 12:55:24.030865  424792 crio.go:462] duration metric: took 1.634534245s to copy over tarball
	I1007 12:55:24.030949  424792 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 12:55:26.516138  424792 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.485152636s)
	I1007 12:55:26.516175  424792 crio.go:469] duration metric: took 2.485274509s to extract the tarball
	I1007 12:55:26.516187  424792 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 12:55:26.558701  424792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:55:26.605028  424792 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1007 12:55:26.605059  424792 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 12:55:26.605141  424792 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:55:26.605201  424792 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1007 12:55:26.605239  424792 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1007 12:55:26.605268  424792 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1007 12:55:26.605175  424792 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1007 12:55:26.605223  424792 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 12:55:26.605202  424792 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1007 12:55:26.605222  424792 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 12:55:26.606576  424792 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1007 12:55:26.606634  424792 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1007 12:55:26.606645  424792 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1007 12:55:26.606674  424792 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1007 12:55:26.606575  424792 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 12:55:26.606716  424792 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:55:26.606576  424792 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 12:55:26.607052  424792 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1007 12:55:26.773365  424792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1007 12:55:26.773617  424792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1007 12:55:26.778684  424792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1007 12:55:26.778723  424792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1007 12:55:26.795370  424792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 12:55:26.811279  424792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1007 12:55:26.839088  424792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1007 12:55:26.906154  424792 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1007 12:55:26.906210  424792 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1007 12:55:26.906232  424792 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1007 12:55:26.906273  424792 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1007 12:55:26.906281  424792 ssh_runner.go:195] Run: which crictl
	I1007 12:55:26.906317  424792 ssh_runner.go:195] Run: which crictl
	I1007 12:55:26.951670  424792 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1007 12:55:26.951693  424792 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1007 12:55:26.951728  424792 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1007 12:55:26.951727  424792 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 12:55:26.951785  424792 ssh_runner.go:195] Run: which crictl
	I1007 12:55:26.951785  424792 ssh_runner.go:195] Run: which crictl
	I1007 12:55:26.951834  424792 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1007 12:55:26.951944  424792 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 12:55:26.951969  424792 ssh_runner.go:195] Run: which crictl
	I1007 12:55:26.969160  424792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:55:26.974362  424792 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1007 12:55:26.974412  424792 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1007 12:55:26.974455  424792 ssh_runner.go:195] Run: which crictl
	I1007 12:55:26.978499  424792 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1007 12:55:26.978553  424792 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1007 12:55:26.978596  424792 ssh_runner.go:195] Run: which crictl
	I1007 12:55:26.978595  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1007 12:55:26.978644  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1007 12:55:26.978697  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1007 12:55:26.978721  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 12:55:26.978796  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 12:55:27.204651  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1007 12:55:27.204720  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1007 12:55:27.204777  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1007 12:55:27.204809  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1007 12:55:27.204883  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 12:55:27.204907  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1007 12:55:27.205030  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 12:55:27.364606  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1007 12:55:27.364663  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1007 12:55:27.364711  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1007 12:55:27.364787  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 12:55:27.364813  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1007 12:55:27.364826  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1007 12:55:27.364900  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 12:55:27.510677  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1007 12:55:27.545860  424792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1007 12:55:27.545889  424792 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1007 12:55:27.545972  424792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1007 12:55:27.548677  424792 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1007 12:55:27.548741  424792 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1007 12:55:27.548770  424792 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1007 12:55:27.548794  424792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1007 12:55:27.548677  424792 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1007 12:55:27.548834  424792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1007 12:55:27.548929  424792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1007 12:55:27.548837  424792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1007 12:55:27.602612  424792 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1007 12:55:27.602625  424792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1007 12:55:27.602716  424792 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1007 12:55:27.602742  424792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1007 12:55:27.602786  424792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1007 12:55:27.603950  424792 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1007 12:55:27.604068  424792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1007 12:55:29.764347  424792 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.215522169s)
	I1007 12:55:29.764378  424792 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.21544107s)
	I1007 12:55:29.764380  424792 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.215428716s)
	I1007 12:55:29.764401  424792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1007 12:55:29.764402  424792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1007 12:55:29.764407  424792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1007 12:55:29.764400  424792 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.161637035s)
	I1007 12:55:29.764414  424792 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.160332603s)
	I1007 12:55:29.764424  424792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1007 12:55:29.764358  424792 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.215387789s)
	I1007 12:55:29.764434  424792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1007 12:55:29.764436  424792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1007 12:55:29.764416  424792 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.161610041s)
	I1007 12:55:29.764448  424792 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1007 12:55:29.764485  424792 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1007 12:55:29.764530  424792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1007 12:55:30.516907  424792 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1007 12:55:30.516963  424792 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1007 12:55:30.517016  424792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1007 12:55:31.270493  424792 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1007 12:55:31.270536  424792 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1007 12:55:31.270581  424792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1007 12:55:31.719539  424792 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1007 12:55:31.719599  424792 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1007 12:55:31.719673  424792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1007 12:55:32.569472  424792 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1007 12:55:32.569534  424792 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1007 12:55:32.569596  424792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1007 12:55:34.826185  424792 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.25654713s)
	I1007 12:55:34.826232  424792 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1007 12:55:34.826277  424792 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1007 12:55:34.826337  424792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1007 12:55:34.974814  424792 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1007 12:55:34.974875  424792 cache_images.go:123] Successfully loaded all cached images
	I1007 12:55:34.974883  424792 cache_images.go:92] duration metric: took 8.369808464s to LoadCachedImages
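
The block above shows the fallback path when no preloaded image set matches: each cached image tarball is copied to the node and loaded into the CRI-O store with "sudo podman load -i". A minimal, self-contained sketch of that transfer-and-load pattern is shown below; it is illustrative only (not minikube's cache_images implementation), it runs the commands locally rather than over SSH, and the tarball paths are assumptions taken from the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadCachedImage loads one cached image tarball into the CRI-O image
    // store via podman, mirroring the "sudo podman load -i ..." commands
    // recorded in the log above. In minikube these run on the node over SSH;
    // here they run locally for simplicity.
    func loadCachedImage(tarball string) error {
    	// If the tarball is already present, the transfer step can be skipped
    	// (the log prints "copy: skipping ... (exists)" in that case).
    	if err := exec.Command("stat", tarball).Run(); err == nil {
    		fmt.Printf("copy: skipping %s (exists)\n", tarball)
    	}
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	// Hypothetical image tarballs, matching the node paths seen above.
    	images := []string{
    		"/var/lib/minikube/images/coredns_v1.8.6",
    		"/var/lib/minikube/images/kube-apiserver_v1.24.4",
    	}
    	for _, img := range images {
    		if err := loadCachedImage(img); err != nil {
    			fmt.Println("load failed:", err)
    		}
    	}
    }
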
	I1007 12:55:34.974902  424792 kubeadm.go:934] updating node { 192.168.39.79 8443 v1.24.4 crio true true} ...
	I1007 12:55:34.975099  424792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-474339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.79
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-474339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:55:34.975221  424792 ssh_runner.go:195] Run: crio config
	I1007 12:55:35.032352  424792 cni.go:84] Creating CNI manager for ""
	I1007 12:55:35.032384  424792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:55:35.032398  424792 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:55:35.032417  424792 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.79 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-474339 NodeName:test-preload-474339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.79"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.79 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:55:35.032609  424792 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.79
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-474339"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.79
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.79"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:55:35.032680  424792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1007 12:55:35.043739  424792 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:55:35.043837  424792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:55:35.055091  424792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1007 12:55:35.074559  424792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:55:35.093845  424792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1007 12:55:35.114816  424792 ssh_runner.go:195] Run: grep 192.168.39.79	control-plane.minikube.internal$ /etc/hosts
	I1007 12:55:35.119383  424792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.79	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:55:35.133092  424792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:55:35.271849  424792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:55:35.290739  424792 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339 for IP: 192.168.39.79
	I1007 12:55:35.290771  424792 certs.go:194] generating shared ca certs ...
	I1007 12:55:35.290796  424792 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:55:35.291040  424792 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 12:55:35.291117  424792 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 12:55:35.291134  424792 certs.go:256] generating profile certs ...
	I1007 12:55:35.291243  424792 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/client.key
	I1007 12:55:35.291334  424792 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/apiserver.key.e09611d4
	I1007 12:55:35.291386  424792 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/proxy-client.key
	I1007 12:55:35.291554  424792 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 12:55:35.291585  424792 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 12:55:35.291592  424792 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 12:55:35.291621  424792 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:55:35.291642  424792 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:55:35.291664  424792 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 12:55:35.291710  424792 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 12:55:35.292581  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:55:35.336349  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 12:55:35.385000  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:55:35.417840  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 12:55:35.450652  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1007 12:55:35.501463  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:55:35.541444  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:55:35.570138  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 12:55:35.599895  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 12:55:35.626188  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:55:35.652828  424792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 12:55:35.679622  424792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:55:35.697871  424792 ssh_runner.go:195] Run: openssl version
	I1007 12:55:35.704292  424792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 12:55:35.715877  424792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 12:55:35.720782  424792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 12:55:35.720849  424792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 12:55:35.727269  424792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 12:55:35.738944  424792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:55:35.750552  424792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:55:35.755762  424792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:55:35.755852  424792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:55:35.762135  424792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 12:55:35.773889  424792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 12:55:35.785613  424792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 12:55:35.790753  424792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 12:55:35.790837  424792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 12:55:35.797141  424792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 12:55:35.808799  424792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:55:35.813863  424792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 12:55:35.820401  424792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 12:55:35.826798  424792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 12:55:35.833945  424792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 12:55:35.840697  424792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 12:55:35.847752  424792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
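
Each of the "openssl x509 -noout -in ... -checkend 86400" invocations above asks whether the given certificate expires within the next 86400 seconds (24 hours). A rough Go equivalent of that check is sketched below; it is an illustration only (not minikube's certs.go), and the certificate path in main is an assumption taken from the log.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path
    // expires within the given duration, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// 86400 seconds = 24h, matching the -checkend argument in the log.
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
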
	I1007 12:55:35.854136  424792 kubeadm.go:392] StartCluster: {Name:test-preload-474339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-474339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:55:35.854238  424792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 12:55:35.854299  424792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:55:35.893314  424792 cri.go:89] found id: ""
	I1007 12:55:35.893406  424792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:55:35.904137  424792 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 12:55:35.904162  424792 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 12:55:35.904216  424792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 12:55:35.914871  424792 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 12:55:35.915391  424792 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-474339" does not appear in /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:55:35.915587  424792 kubeconfig.go:62] /home/jenkins/minikube-integration/19763-377026/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-474339" cluster setting kubeconfig missing "test-preload-474339" context setting]
	I1007 12:55:35.915894  424792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:55:35.916553  424792 kapi.go:59] client config for test-preload-474339: &rest.Config{Host:"https://192.168.39.79:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:55:35.917256  424792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 12:55:35.927722  424792 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.79
	I1007 12:55:35.927767  424792 kubeadm.go:1160] stopping kube-system containers ...
	I1007 12:55:35.927781  424792 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1007 12:55:35.927839  424792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:55:35.966390  424792 cri.go:89] found id: ""
	I1007 12:55:35.966467  424792 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 12:55:35.983361  424792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:55:35.993507  424792 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:55:35.993536  424792 kubeadm.go:157] found existing configuration files:
	
	I1007 12:55:35.993599  424792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:55:36.003198  424792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:55:36.003258  424792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:55:36.013502  424792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:55:36.023144  424792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:55:36.023241  424792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:55:36.033915  424792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:55:36.044680  424792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:55:36.044745  424792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:55:36.055900  424792 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:55:36.065747  424792 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:55:36.065809  424792 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:55:36.076275  424792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:55:36.087399  424792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:55:36.187442  424792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:55:36.706120  424792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:55:36.999294  424792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:55:37.085087  424792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:55:37.213399  424792 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:55:37.213529  424792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:55:37.713948  424792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:55:38.214592  424792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:55:38.250845  424792 api_server.go:72] duration metric: took 1.037453279s to wait for apiserver process to appear ...
	I1007 12:55:38.250886  424792 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:55:38.250914  424792 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I1007 12:55:38.251501  424792 api_server.go:269] stopped: https://192.168.39.79:8443/healthz: Get "https://192.168.39.79:8443/healthz": dial tcp 192.168.39.79:8443: connect: connection refused
	I1007 12:55:38.751090  424792 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I1007 12:55:42.534676  424792 api_server.go:279] https://192.168.39.79:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 12:55:42.534708  424792 api_server.go:103] status: https://192.168.39.79:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 12:55:42.534723  424792 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I1007 12:55:42.624047  424792 api_server.go:279] https://192.168.39.79:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1007 12:55:42.624079  424792 api_server.go:103] status: https://192.168.39.79:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1007 12:55:42.751383  424792 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I1007 12:55:42.765713  424792 api_server.go:279] https://192.168.39.79:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:55:42.765754  424792 api_server.go:103] status: https://192.168.39.79:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:55:43.251750  424792 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I1007 12:55:43.258084  424792 api_server.go:279] https://192.168.39.79:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1007 12:55:43.258124  424792 api_server.go:103] status: https://192.168.39.79:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1007 12:55:43.751838  424792 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I1007 12:55:43.757711  424792 api_server.go:279] https://192.168.39.79:8443/healthz returned 200:
	ok
	I1007 12:55:43.765342  424792 api_server.go:141] control plane version: v1.24.4
	I1007 12:55:43.765380  424792 api_server.go:131] duration metric: took 5.514487118s to wait for apiserver health ...
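
The healthz probes above retry roughly every 500ms, treating the early 403 (anonymous access) and 500 (post-start hooks still failing) responses as "not yet healthy" until the endpoint returns 200/ok. A minimal sketch of that poll-until-healthy pattern is shown below; it is illustrative only (not minikube's api_server.go), and the address, timeout, and interval are assumptions taken from the log. TLS verification is skipped because the probe is anonymous, as above.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				return nil // healthz returned 200: ok
    			}
    			fmt.Printf("healthz returned %d, retrying\n", code)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.79:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
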
	I1007 12:55:43.765403  424792 cni.go:84] Creating CNI manager for ""
	I1007 12:55:43.765411  424792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 12:55:43.767249  424792 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 12:55:43.768671  424792 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 12:55:43.781686  424792 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 12:55:43.802178  424792 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:55:43.802325  424792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 12:55:43.802351  424792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 12:55:43.813125  424792 system_pods.go:59] 8 kube-system pods found
	I1007 12:55:43.813169  424792 system_pods.go:61] "coredns-6d4b75cb6d-l9hh5" [426a22eb-70a1-417f-972d-33fdf72fac11] Running
	I1007 12:55:43.813175  424792 system_pods.go:61] "coredns-6d4b75cb6d-rdnrz" [3faeec35-44dd-4911-8d43-f94ba92ecedf] Running
	I1007 12:55:43.813184  424792 system_pods.go:61] "etcd-test-preload-474339" [3f59640d-32b5-4288-b6c8-2a383db8eea1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1007 12:55:43.813190  424792 system_pods.go:61] "kube-apiserver-test-preload-474339" [1aaa597e-dcc7-4114-b649-5ada932355d4] Running
	I1007 12:55:43.813197  424792 system_pods.go:61] "kube-controller-manager-test-preload-474339" [c2cc9f7a-5cb8-4f86-ab27-e02cda962dc1] Running
	I1007 12:55:43.813203  424792 system_pods.go:61] "kube-proxy-777v2" [fbd1e15a-ac71-46da-ba4a-7fc894cb87c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1007 12:55:43.813214  424792 system_pods.go:61] "kube-scheduler-test-preload-474339" [7a9e2155-0abb-4828-ad5c-23d01d018a76] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1007 12:55:43.813223  424792 system_pods.go:61] "storage-provisioner" [e3f7478e-fa61-4331-aa87-bc73a227f7ef] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1007 12:55:43.813232  424792 system_pods.go:74] duration metric: took 11.021269ms to wait for pod list to return data ...
	I1007 12:55:43.813241  424792 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:55:43.818902  424792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:55:43.818968  424792 node_conditions.go:123] node cpu capacity is 2
	I1007 12:55:43.818984  424792 node_conditions.go:105] duration metric: took 5.736276ms to run NodePressure ...
	I1007 12:55:43.819009  424792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 12:55:44.051282  424792 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1007 12:55:44.058457  424792 kubeadm.go:739] kubelet initialised
	I1007 12:55:44.058488  424792 kubeadm.go:740] duration metric: took 7.177114ms waiting for restarted kubelet to initialise ...
	I1007 12:55:44.058498  424792 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:55:44.066678  424792 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-l9hh5" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:44.076099  424792 pod_ready.go:98] node "test-preload-474339" hosting pod "coredns-6d4b75cb6d-l9hh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:44.076126  424792 pod_ready.go:82] duration metric: took 9.412751ms for pod "coredns-6d4b75cb6d-l9hh5" in "kube-system" namespace to be "Ready" ...
	E1007 12:55:44.076138  424792 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-474339" hosting pod "coredns-6d4b75cb6d-l9hh5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:44.076146  424792 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-rdnrz" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:44.084275  424792 pod_ready.go:98] node "test-preload-474339" hosting pod "coredns-6d4b75cb6d-rdnrz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:44.084305  424792 pod_ready.go:82] duration metric: took 8.148856ms for pod "coredns-6d4b75cb6d-rdnrz" in "kube-system" namespace to be "Ready" ...
	E1007 12:55:44.084314  424792 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-474339" hosting pod "coredns-6d4b75cb6d-rdnrz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:44.084322  424792 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:44.092506  424792 pod_ready.go:98] node "test-preload-474339" hosting pod "etcd-test-preload-474339" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:44.092542  424792 pod_ready.go:82] duration metric: took 8.209483ms for pod "etcd-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	E1007 12:55:44.092556  424792 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-474339" hosting pod "etcd-test-preload-474339" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:44.092564  424792 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:44.209533  424792 pod_ready.go:98] node "test-preload-474339" hosting pod "kube-apiserver-test-preload-474339" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:44.209563  424792 pod_ready.go:82] duration metric: took 116.988861ms for pod "kube-apiserver-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	E1007 12:55:44.209576  424792 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-474339" hosting pod "kube-apiserver-test-preload-474339" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:44.209586  424792 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:44.606636  424792 pod_ready.go:98] node "test-preload-474339" hosting pod "kube-controller-manager-test-preload-474339" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:44.606669  424792 pod_ready.go:82] duration metric: took 397.071903ms for pod "kube-controller-manager-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	E1007 12:55:44.606679  424792 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-474339" hosting pod "kube-controller-manager-test-preload-474339" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:44.606691  424792 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-777v2" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:45.006431  424792 pod_ready.go:98] node "test-preload-474339" hosting pod "kube-proxy-777v2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:45.006461  424792 pod_ready.go:82] duration metric: took 399.760024ms for pod "kube-proxy-777v2" in "kube-system" namespace to be "Ready" ...
	E1007 12:55:45.006470  424792 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-474339" hosting pod "kube-proxy-777v2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:45.006477  424792 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:45.406452  424792 pod_ready.go:98] node "test-preload-474339" hosting pod "kube-scheduler-test-preload-474339" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:45.406482  424792 pod_ready.go:82] duration metric: took 399.998799ms for pod "kube-scheduler-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	E1007 12:55:45.406492  424792 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-474339" hosting pod "kube-scheduler-test-preload-474339" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:45.406499  424792 pod_ready.go:39] duration metric: took 1.34799063s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:55:45.406519  424792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:55:45.419130  424792 ops.go:34] apiserver oom_adj: -16
	I1007 12:55:45.419159  424792 kubeadm.go:597] duration metric: took 9.514990598s to restartPrimaryControlPlane
	I1007 12:55:45.419183  424792 kubeadm.go:394] duration metric: took 9.565040956s to StartCluster
	I1007 12:55:45.419201  424792 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:55:45.419284  424792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 12:55:45.419956  424792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:55:45.420236  424792 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.79 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 12:55:45.420281  424792 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 12:55:45.420399  424792 addons.go:69] Setting storage-provisioner=true in profile "test-preload-474339"
	I1007 12:55:45.420425  424792 addons.go:234] Setting addon storage-provisioner=true in "test-preload-474339"
	W1007 12:55:45.420434  424792 addons.go:243] addon storage-provisioner should already be in state true
	I1007 12:55:45.420465  424792 addons.go:69] Setting default-storageclass=true in profile "test-preload-474339"
	I1007 12:55:45.420481  424792 config.go:182] Loaded profile config "test-preload-474339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1007 12:55:45.420507  424792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-474339"
	I1007 12:55:45.420478  424792 host.go:66] Checking if "test-preload-474339" exists ...
	I1007 12:55:45.420855  424792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:55:45.420900  424792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:55:45.420998  424792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:55:45.421054  424792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:55:45.423064  424792 out.go:177] * Verifying Kubernetes components...
	I1007 12:55:45.424371  424792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:55:45.437822  424792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41327
	I1007 12:55:45.437823  424792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43639
	I1007 12:55:45.438433  424792 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:55:45.438448  424792 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:55:45.438917  424792 main.go:141] libmachine: Using API Version  1
	I1007 12:55:45.438933  424792 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:55:45.439067  424792 main.go:141] libmachine: Using API Version  1
	I1007 12:55:45.439091  424792 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:55:45.439317  424792 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:55:45.439422  424792 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:55:45.439478  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetState
	I1007 12:55:45.440020  424792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:55:45.440078  424792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:55:45.441701  424792 kapi.go:59] client config for test-preload-474339: &rest.Config{Host:"https://192.168.39.79:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/client.crt", KeyFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/profiles/test-preload-474339/client.key", CAFile:"/home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f0a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 12:55:45.442000  424792 addons.go:234] Setting addon default-storageclass=true in "test-preload-474339"
	W1007 12:55:45.442018  424792 addons.go:243] addon default-storageclass should already be in state true
	I1007 12:55:45.442049  424792 host.go:66] Checking if "test-preload-474339" exists ...
	I1007 12:55:45.442398  424792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:55:45.442448  424792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:55:45.456552  424792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35243
	I1007 12:55:45.457043  424792 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:55:45.457588  424792 main.go:141] libmachine: Using API Version  1
	I1007 12:55:45.457616  424792 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:55:45.457959  424792 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:55:45.458117  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetState
	I1007 12:55:45.458495  424792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46103
	I1007 12:55:45.458936  424792 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:55:45.459535  424792 main.go:141] libmachine: Using API Version  1
	I1007 12:55:45.459558  424792 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:55:45.459747  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:55:45.459892  424792 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:55:45.460315  424792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:55:45.460359  424792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:55:45.461763  424792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:55:45.463344  424792 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:55:45.463368  424792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:55:45.463391  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:45.467027  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:45.467537  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:45.467565  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:45.467783  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:45.467965  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:45.468129  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:45.468259  424792 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/test-preload-474339/id_rsa Username:docker}
	I1007 12:55:45.502845  424792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37275
	I1007 12:55:45.503403  424792 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:55:45.503956  424792 main.go:141] libmachine: Using API Version  1
	I1007 12:55:45.503977  424792 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:55:45.504339  424792 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:55:45.504589  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetState
	I1007 12:55:45.506164  424792 main.go:141] libmachine: (test-preload-474339) Calling .DriverName
	I1007 12:55:45.506400  424792 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:55:45.506418  424792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:55:45.506437  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHHostname
	I1007 12:55:45.509092  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:45.509487  424792 main.go:141] libmachine: (test-preload-474339) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:a6:3b", ip: ""} in network mk-test-preload-474339: {Iface:virbr1 ExpiryTime:2024-10-07 13:55:13 +0000 UTC Type:0 Mac:52:54:00:56:a6:3b Iaid: IPaddr:192.168.39.79 Prefix:24 Hostname:test-preload-474339 Clientid:01:52:54:00:56:a6:3b}
	I1007 12:55:45.509517  424792 main.go:141] libmachine: (test-preload-474339) DBG | domain test-preload-474339 has defined IP address 192.168.39.79 and MAC address 52:54:00:56:a6:3b in network mk-test-preload-474339
	I1007 12:55:45.509686  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHPort
	I1007 12:55:45.509868  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHKeyPath
	I1007 12:55:45.509999  424792 main.go:141] libmachine: (test-preload-474339) Calling .GetSSHUsername
	I1007 12:55:45.510121  424792 sshutil.go:53] new ssh client: &{IP:192.168.39.79 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/test-preload-474339/id_rsa Username:docker}
	I1007 12:55:45.637453  424792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:55:45.656037  424792 node_ready.go:35] waiting up to 6m0s for node "test-preload-474339" to be "Ready" ...
	I1007 12:55:45.747689  424792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:55:45.761649  424792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:55:46.780435  424792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.032704333s)
	I1007 12:55:46.780510  424792 main.go:141] libmachine: Making call to close driver server
	I1007 12:55:46.780522  424792 main.go:141] libmachine: (test-preload-474339) Calling .Close
	I1007 12:55:46.780529  424792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.018841721s)
	I1007 12:55:46.780574  424792 main.go:141] libmachine: Making call to close driver server
	I1007 12:55:46.780593  424792 main.go:141] libmachine: (test-preload-474339) Calling .Close
	I1007 12:55:46.780846  424792 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:55:46.780859  424792 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:55:46.780867  424792 main.go:141] libmachine: Making call to close driver server
	I1007 12:55:46.780873  424792 main.go:141] libmachine: (test-preload-474339) Calling .Close
	I1007 12:55:46.781218  424792 main.go:141] libmachine: (test-preload-474339) DBG | Closing plugin on server side
	I1007 12:55:46.781224  424792 main.go:141] libmachine: (test-preload-474339) DBG | Closing plugin on server side
	I1007 12:55:46.781274  424792 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:55:46.781284  424792 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:55:46.781294  424792 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:55:46.781296  424792 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:55:46.781305  424792 main.go:141] libmachine: Making call to close driver server
	I1007 12:55:46.781315  424792 main.go:141] libmachine: (test-preload-474339) Calling .Close
	I1007 12:55:46.781518  424792 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:55:46.781533  424792 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:55:46.781562  424792 main.go:141] libmachine: (test-preload-474339) DBG | Closing plugin on server side
	I1007 12:55:46.788782  424792 main.go:141] libmachine: Making call to close driver server
	I1007 12:55:46.788810  424792 main.go:141] libmachine: (test-preload-474339) Calling .Close
	I1007 12:55:46.789089  424792 main.go:141] libmachine: Successfully made call to close driver server
	I1007 12:55:46.789105  424792 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 12:55:46.791123  424792 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1007 12:55:46.792571  424792 addons.go:510] duration metric: took 1.372290645s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1007 12:55:47.660086  424792 node_ready.go:53] node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:50.159892  424792 node_ready.go:53] node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:52.161003  424792 node_ready.go:53] node "test-preload-474339" has status "Ready":"False"
	I1007 12:55:53.160027  424792 node_ready.go:49] node "test-preload-474339" has status "Ready":"True"
	I1007 12:55:53.160062  424792 node_ready.go:38] duration metric: took 7.503977671s for node "test-preload-474339" to be "Ready" ...
	I1007 12:55:53.160086  424792 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:55:53.165748  424792 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-rdnrz" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:53.171995  424792 pod_ready.go:93] pod "coredns-6d4b75cb6d-rdnrz" in "kube-system" namespace has status "Ready":"True"
	I1007 12:55:53.172023  424792 pod_ready.go:82] duration metric: took 6.234472ms for pod "coredns-6d4b75cb6d-rdnrz" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:53.172033  424792 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:53.177776  424792 pod_ready.go:93] pod "etcd-test-preload-474339" in "kube-system" namespace has status "Ready":"True"
	I1007 12:55:53.177800  424792 pod_ready.go:82] duration metric: took 5.761087ms for pod "etcd-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:53.177809  424792 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:53.685258  424792 pod_ready.go:93] pod "kube-apiserver-test-preload-474339" in "kube-system" namespace has status "Ready":"True"
	I1007 12:55:53.685290  424792 pod_ready.go:82] duration metric: took 507.47408ms for pod "kube-apiserver-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:53.685303  424792 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:53.690147  424792 pod_ready.go:93] pod "kube-controller-manager-test-preload-474339" in "kube-system" namespace has status "Ready":"True"
	I1007 12:55:53.690175  424792 pod_ready.go:82] duration metric: took 4.862783ms for pod "kube-controller-manager-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:53.690189  424792 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-777v2" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:53.961337  424792 pod_ready.go:93] pod "kube-proxy-777v2" in "kube-system" namespace has status "Ready":"True"
	I1007 12:55:53.961368  424792 pod_ready.go:82] duration metric: took 271.170584ms for pod "kube-proxy-777v2" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:53.961382  424792 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:54.360836  424792 pod_ready.go:93] pod "kube-scheduler-test-preload-474339" in "kube-system" namespace has status "Ready":"True"
	I1007 12:55:54.360869  424792 pod_ready.go:82] duration metric: took 399.479935ms for pod "kube-scheduler-test-preload-474339" in "kube-system" namespace to be "Ready" ...
	I1007 12:55:54.360881  424792 pod_ready.go:39] duration metric: took 1.200782196s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:55:54.360899  424792 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:55:54.360968  424792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:55:54.376932  424792 api_server.go:72] duration metric: took 8.9566474s to wait for apiserver process to appear ...
	I1007 12:55:54.376966  424792 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:55:54.376987  424792 api_server.go:253] Checking apiserver healthz at https://192.168.39.79:8443/healthz ...
	I1007 12:55:54.382357  424792 api_server.go:279] https://192.168.39.79:8443/healthz returned 200:
	ok
	I1007 12:55:54.383306  424792 api_server.go:141] control plane version: v1.24.4
	I1007 12:55:54.383336  424792 api_server.go:131] duration metric: took 6.360627ms to wait for apiserver health ...
	I1007 12:55:54.383345  424792 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:55:54.562873  424792 system_pods.go:59] 7 kube-system pods found
	I1007 12:55:54.562904  424792 system_pods.go:61] "coredns-6d4b75cb6d-rdnrz" [3faeec35-44dd-4911-8d43-f94ba92ecedf] Running
	I1007 12:55:54.562908  424792 system_pods.go:61] "etcd-test-preload-474339" [3f59640d-32b5-4288-b6c8-2a383db8eea1] Running
	I1007 12:55:54.562912  424792 system_pods.go:61] "kube-apiserver-test-preload-474339" [1aaa597e-dcc7-4114-b649-5ada932355d4] Running
	I1007 12:55:54.562916  424792 system_pods.go:61] "kube-controller-manager-test-preload-474339" [c2cc9f7a-5cb8-4f86-ab27-e02cda962dc1] Running
	I1007 12:55:54.562919  424792 system_pods.go:61] "kube-proxy-777v2" [fbd1e15a-ac71-46da-ba4a-7fc894cb87c2] Running
	I1007 12:55:54.562922  424792 system_pods.go:61] "kube-scheduler-test-preload-474339" [7a9e2155-0abb-4828-ad5c-23d01d018a76] Running
	I1007 12:55:54.562925  424792 system_pods.go:61] "storage-provisioner" [e3f7478e-fa61-4331-aa87-bc73a227f7ef] Running
	I1007 12:55:54.562932  424792 system_pods.go:74] duration metric: took 179.581238ms to wait for pod list to return data ...
	I1007 12:55:54.562944  424792 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:55:54.760854  424792 default_sa.go:45] found service account: "default"
	I1007 12:55:54.760910  424792 default_sa.go:55] duration metric: took 197.956143ms for default service account to be created ...
	I1007 12:55:54.760927  424792 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:55:54.963045  424792 system_pods.go:86] 7 kube-system pods found
	I1007 12:55:54.963082  424792 system_pods.go:89] "coredns-6d4b75cb6d-rdnrz" [3faeec35-44dd-4911-8d43-f94ba92ecedf] Running
	I1007 12:55:54.963090  424792 system_pods.go:89] "etcd-test-preload-474339" [3f59640d-32b5-4288-b6c8-2a383db8eea1] Running
	I1007 12:55:54.963095  424792 system_pods.go:89] "kube-apiserver-test-preload-474339" [1aaa597e-dcc7-4114-b649-5ada932355d4] Running
	I1007 12:55:54.963101  424792 system_pods.go:89] "kube-controller-manager-test-preload-474339" [c2cc9f7a-5cb8-4f86-ab27-e02cda962dc1] Running
	I1007 12:55:54.963114  424792 system_pods.go:89] "kube-proxy-777v2" [fbd1e15a-ac71-46da-ba4a-7fc894cb87c2] Running
	I1007 12:55:54.963120  424792 system_pods.go:89] "kube-scheduler-test-preload-474339" [7a9e2155-0abb-4828-ad5c-23d01d018a76] Running
	I1007 12:55:54.963124  424792 system_pods.go:89] "storage-provisioner" [e3f7478e-fa61-4331-aa87-bc73a227f7ef] Running
	I1007 12:55:54.963133  424792 system_pods.go:126] duration metric: took 202.197774ms to wait for k8s-apps to be running ...
	I1007 12:55:54.963142  424792 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:55:54.963216  424792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:55:54.979969  424792 system_svc.go:56] duration metric: took 16.815487ms WaitForService to wait for kubelet
	I1007 12:55:54.980002  424792 kubeadm.go:582] duration metric: took 9.559729819s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:55:54.980021  424792 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:55:55.160777  424792 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1007 12:55:55.160804  424792 node_conditions.go:123] node cpu capacity is 2
	I1007 12:55:55.160814  424792 node_conditions.go:105] duration metric: took 180.788475ms to run NodePressure ...
	I1007 12:55:55.160828  424792 start.go:241] waiting for startup goroutines ...
	I1007 12:55:55.160835  424792 start.go:246] waiting for cluster config update ...
	I1007 12:55:55.160845  424792 start.go:255] writing updated cluster config ...
	I1007 12:55:55.161114  424792 ssh_runner.go:195] Run: rm -f paused
	I1007 12:55:55.212141  424792 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I1007 12:55:55.214218  424792 out.go:201] 
	W1007 12:55:55.215618  424792 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I1007 12:55:55.216775  424792 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1007 12:55:55.218038  424792 out.go:177] * Done! kubectl is now configured to use "test-preload-474339" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.271187493Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5586329-88e8-43eb-adc3-5dbe2587b554 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.273522931Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4076bef8-2be2-44fe-83a1-fe0a2daee90b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.274037544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728305756274009776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4076bef8-2be2-44fe-83a1-fe0a2daee90b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.274835970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=417902f9-1e1b-405b-b56e-bb903cae2df3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.274921808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=417902f9-1e1b-405b-b56e-bb903cae2df3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.275331804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f95a55327aa67d32f2c3f5f315452ae93bc6f6ee2bf43bba2bdb5ff5cf576243,PodSandboxId:e159e590823804a638b9aa3563b9eb7f5c4326c4193a12065aac76916983cb7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728305751696849456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rdnrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faeec35-44dd-4911-8d43-f94ba92ecedf,},Annotations:map[string]string{io.kubernetes.container.hash: b0f3bd07,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60731ccbbd1f45970a5f86ee65407c1a173b9375b34a4815782b4aea6978f85,PodSandboxId:bf19d700d8fecaad0cd8acf10e677c118313b82fe5409885d27089162a8154d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728305744432772124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e3f7478e-fa61-4331-aa87-bc73a227f7ef,},Annotations:map[string]string{io.kubernetes.container.hash: eadb8e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69e2ed988f794fd74dd57cfb05118dbbc676e88791f85e6b90cbb184b685ff2,PodSandboxId:a23d0edd5f652008c9de02a466a077f8341eb4dcc7a6734047e1f467294151d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728305744106849561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-777v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd
1e15a-ac71-46da-ba4a-7fc894cb87c2,},Annotations:map[string]string{io.kubernetes.container.hash: a7738a2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d10c7c4db9cf76354e61d113f745594dd3817341b23444150b3b42e9f310fc7b,PodSandboxId:c2cc4459267036fcb4d0b8e536951630b822baad6b227a69c6c4b90909cf4f7c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728305737916772319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e7dd00697946ac17c219e7010a97c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 93864750,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a8a22b856db54c2bff0167ee523e16e5ad6bc47a8e2015dc8500f6a3ce9d5b,PodSandboxId:6646cca0fa084001d9b20d90790e2b1a48ec2491250aa77cb5f3b710acb1061c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728305737881076064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9320b6a98387072cc701e61
7838d232,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f68d30e5da2136de98885e2b659838e6ef911b7d9a07cb8244eea87c6549436,PodSandboxId:49c8220ff55115b1ac86d0d51cb69f64e38250cdf479fdffec8f55a9b4241890,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728305737946520397,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecab802cfcaa04e9cb17ceac7236b7b1,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebeca0f2c93047b31834021510886a09917e24cc562a6bfc1389626c1085007c,PodSandboxId:785707d29ed30f5ebc431bbead40e715028f4565be2755b241b69ca282345eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728305737825621975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 993865acd3dbe132da25c6c1d4813a7a,},Annotations
:map[string]string{io.kubernetes.container.hash: a5abea8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=417902f9-1e1b-405b-b56e-bb903cae2df3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.322811027Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2bce58c-484d-4a0f-8580-c6ece1d2bd39 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.322887378Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2bce58c-484d-4a0f-8580-c6ece1d2bd39 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.323981216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c9d11b2-30d6-41e7-b73b-5e941073a844 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.324605799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728305756324578476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c9d11b2-30d6-41e7-b73b-5e941073a844 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.325182533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70bf69f3-c364-44c5-9044-1a9a1cd1d8a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.325301010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70bf69f3-c364-44c5-9044-1a9a1cd1d8a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.325483958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f95a55327aa67d32f2c3f5f315452ae93bc6f6ee2bf43bba2bdb5ff5cf576243,PodSandboxId:e159e590823804a638b9aa3563b9eb7f5c4326c4193a12065aac76916983cb7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728305751696849456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rdnrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faeec35-44dd-4911-8d43-f94ba92ecedf,},Annotations:map[string]string{io.kubernetes.container.hash: b0f3bd07,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60731ccbbd1f45970a5f86ee65407c1a173b9375b34a4815782b4aea6978f85,PodSandboxId:bf19d700d8fecaad0cd8acf10e677c118313b82fe5409885d27089162a8154d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728305744432772124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e3f7478e-fa61-4331-aa87-bc73a227f7ef,},Annotations:map[string]string{io.kubernetes.container.hash: eadb8e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69e2ed988f794fd74dd57cfb05118dbbc676e88791f85e6b90cbb184b685ff2,PodSandboxId:a23d0edd5f652008c9de02a466a077f8341eb4dcc7a6734047e1f467294151d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728305744106849561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-777v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd
1e15a-ac71-46da-ba4a-7fc894cb87c2,},Annotations:map[string]string{io.kubernetes.container.hash: a7738a2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d10c7c4db9cf76354e61d113f745594dd3817341b23444150b3b42e9f310fc7b,PodSandboxId:c2cc4459267036fcb4d0b8e536951630b822baad6b227a69c6c4b90909cf4f7c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728305737916772319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e7dd00697946ac17c219e7010a97c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 93864750,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a8a22b856db54c2bff0167ee523e16e5ad6bc47a8e2015dc8500f6a3ce9d5b,PodSandboxId:6646cca0fa084001d9b20d90790e2b1a48ec2491250aa77cb5f3b710acb1061c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728305737881076064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9320b6a98387072cc701e61
7838d232,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f68d30e5da2136de98885e2b659838e6ef911b7d9a07cb8244eea87c6549436,PodSandboxId:49c8220ff55115b1ac86d0d51cb69f64e38250cdf479fdffec8f55a9b4241890,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728305737946520397,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecab802cfcaa04e9cb17ceac7236b7b1,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebeca0f2c93047b31834021510886a09917e24cc562a6bfc1389626c1085007c,PodSandboxId:785707d29ed30f5ebc431bbead40e715028f4565be2755b241b69ca282345eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728305737825621975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 993865acd3dbe132da25c6c1d4813a7a,},Annotations
:map[string]string{io.kubernetes.container.hash: a5abea8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70bf69f3-c364-44c5-9044-1a9a1cd1d8a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.333428957Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=18027fa9-9090-432d-9cb5-d43f1abb0784 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.333668718Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e159e590823804a638b9aa3563b9eb7f5c4326c4193a12065aac76916983cb7a,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-rdnrz,Uid:3faeec35-44dd-4911-8d43-f94ba92ecedf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728305751438020375,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-rdnrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faeec35-44dd-4911-8d43-f94ba92ecedf,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:55:43.078293496Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bf19d700d8fecaad0cd8acf10e677c118313b82fe5409885d27089162a8154d9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e3f7478e-fa61-4331-aa87-bc73a227f7ef,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728305744287019601,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3f7478e-fa61-4331-aa87-bc73a227f7ef,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-07T12:55:43.078290054Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a23d0edd5f652008c9de02a466a077f8341eb4dcc7a6734047e1f467294151d3,Metadata:&PodSandboxMetadata{Name:kube-proxy-777v2,Uid:fbd1e15a-ac71-46da-ba4a-7fc894cb87c2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728305743995125841,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-777v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd1e15a-ac71-46da-ba4a-7fc894cb87c2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-07T12:55:43.078299236Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2cc4459267036fcb4d0b8e536951630b822baad6b227a69c6c4b90909cf4f7c,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-474339,Uid:e47e7dd00697946ac
17c219e7010a97c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728305737662496444,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e7dd00697946ac17c219e7010a97c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.79:2379,kubernetes.io/config.hash: e47e7dd00697946ac17c219e7010a97c,kubernetes.io/config.seen: 2024-10-07T12:55:37.165745984Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:49c8220ff55115b1ac86d0d51cb69f64e38250cdf479fdffec8f55a9b4241890,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-474339,Uid:ecab802cfcaa04e9cb17ceac7236b7b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728305737661735226,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-pre
load-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecab802cfcaa04e9cb17ceac7236b7b1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ecab802cfcaa04e9cb17ceac7236b7b1,kubernetes.io/config.seen: 2024-10-07T12:55:37.077600261Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:785707d29ed30f5ebc431bbead40e715028f4565be2755b241b69ca282345eb0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-474339,Uid:993865acd3dbe132da25c6c1d4813a7a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728305737655614259,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 993865acd3dbe132da25c6c1d4813a7a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.79:8443,kubernetes.io/config.hash: 993865acd3dbe132d
a25c6c1d4813a7a,kubernetes.io/config.seen: 2024-10-07T12:55:37.077567259Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6646cca0fa084001d9b20d90790e2b1a48ec2491250aa77cb5f3b710acb1061c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-474339,Uid:c9320b6a98387072cc701e617838d232,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728305737648424348,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9320b6a98387072cc701e617838d232,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c9320b6a98387072cc701e617838d232,kubernetes.io/config.seen: 2024-10-07T12:55:37.077599052Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=18027fa9-9090-432d-9cb5-d43f1abb0784 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.334457822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5538441-c9a3-4837-9980-a1ef2e6b60cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.334534265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5538441-c9a3-4837-9980-a1ef2e6b60cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.334694932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f95a55327aa67d32f2c3f5f315452ae93bc6f6ee2bf43bba2bdb5ff5cf576243,PodSandboxId:e159e590823804a638b9aa3563b9eb7f5c4326c4193a12065aac76916983cb7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728305751696849456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rdnrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faeec35-44dd-4911-8d43-f94ba92ecedf,},Annotations:map[string]string{io.kubernetes.container.hash: b0f3bd07,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60731ccbbd1f45970a5f86ee65407c1a173b9375b34a4815782b4aea6978f85,PodSandboxId:bf19d700d8fecaad0cd8acf10e677c118313b82fe5409885d27089162a8154d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728305744432772124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e3f7478e-fa61-4331-aa87-bc73a227f7ef,},Annotations:map[string]string{io.kubernetes.container.hash: eadb8e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69e2ed988f794fd74dd57cfb05118dbbc676e88791f85e6b90cbb184b685ff2,PodSandboxId:a23d0edd5f652008c9de02a466a077f8341eb4dcc7a6734047e1f467294151d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728305744106849561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-777v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd
1e15a-ac71-46da-ba4a-7fc894cb87c2,},Annotations:map[string]string{io.kubernetes.container.hash: a7738a2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d10c7c4db9cf76354e61d113f745594dd3817341b23444150b3b42e9f310fc7b,PodSandboxId:c2cc4459267036fcb4d0b8e536951630b822baad6b227a69c6c4b90909cf4f7c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728305737916772319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e7dd00697946ac17c219e7010a97c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 93864750,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a8a22b856db54c2bff0167ee523e16e5ad6bc47a8e2015dc8500f6a3ce9d5b,PodSandboxId:6646cca0fa084001d9b20d90790e2b1a48ec2491250aa77cb5f3b710acb1061c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728305737881076064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9320b6a98387072cc701e61
7838d232,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f68d30e5da2136de98885e2b659838e6ef911b7d9a07cb8244eea87c6549436,PodSandboxId:49c8220ff55115b1ac86d0d51cb69f64e38250cdf479fdffec8f55a9b4241890,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728305737946520397,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecab802cfcaa04e9cb17ceac7236b7b1,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebeca0f2c93047b31834021510886a09917e24cc562a6bfc1389626c1085007c,PodSandboxId:785707d29ed30f5ebc431bbead40e715028f4565be2755b241b69ca282345eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728305737825621975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 993865acd3dbe132da25c6c1d4813a7a,},Annotations
:map[string]string{io.kubernetes.container.hash: a5abea8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5538441-c9a3-4837-9980-a1ef2e6b60cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.370383459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf781391-def8-4445-af29-ed12eb5baa62 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.370480337Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf781391-def8-4445-af29-ed12eb5baa62 name=/runtime.v1.RuntimeService/Version
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.375912755Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e45a7829-74d6-4423-b500-618962b7be1e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.376493829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728305756376467176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e45a7829-74d6-4423-b500-618962b7be1e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.377426962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10217bc7-bcfe-461b-b4f5-255ac2b08d27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.377483586Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10217bc7-bcfe-461b-b4f5-255ac2b08d27 name=/runtime.v1.RuntimeService/ListContainers
	Oct 07 12:55:56 test-preload-474339 crio[691]: time="2024-10-07 12:55:56.377641813Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f95a55327aa67d32f2c3f5f315452ae93bc6f6ee2bf43bba2bdb5ff5cf576243,PodSandboxId:e159e590823804a638b9aa3563b9eb7f5c4326c4193a12065aac76916983cb7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728305751696849456,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-rdnrz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3faeec35-44dd-4911-8d43-f94ba92ecedf,},Annotations:map[string]string{io.kubernetes.container.hash: b0f3bd07,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60731ccbbd1f45970a5f86ee65407c1a173b9375b34a4815782b4aea6978f85,PodSandboxId:bf19d700d8fecaad0cd8acf10e677c118313b82fe5409885d27089162a8154d9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728305744432772124,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: e3f7478e-fa61-4331-aa87-bc73a227f7ef,},Annotations:map[string]string{io.kubernetes.container.hash: eadb8e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69e2ed988f794fd74dd57cfb05118dbbc676e88791f85e6b90cbb184b685ff2,PodSandboxId:a23d0edd5f652008c9de02a466a077f8341eb4dcc7a6734047e1f467294151d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728305744106849561,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-777v2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fbd
1e15a-ac71-46da-ba4a-7fc894cb87c2,},Annotations:map[string]string{io.kubernetes.container.hash: a7738a2b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d10c7c4db9cf76354e61d113f745594dd3817341b23444150b3b42e9f310fc7b,PodSandboxId:c2cc4459267036fcb4d0b8e536951630b822baad6b227a69c6c4b90909cf4f7c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728305737916772319,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e47e7dd00697946ac17c219e7010a97c,},Annot
ations:map[string]string{io.kubernetes.container.hash: 93864750,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a8a22b856db54c2bff0167ee523e16e5ad6bc47a8e2015dc8500f6a3ce9d5b,PodSandboxId:6646cca0fa084001d9b20d90790e2b1a48ec2491250aa77cb5f3b710acb1061c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728305737881076064,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9320b6a98387072cc701e61
7838d232,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f68d30e5da2136de98885e2b659838e6ef911b7d9a07cb8244eea87c6549436,PodSandboxId:49c8220ff55115b1ac86d0d51cb69f64e38250cdf479fdffec8f55a9b4241890,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728305737946520397,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecab802cfcaa04e9cb17ceac7236b7b1,},
Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebeca0f2c93047b31834021510886a09917e24cc562a6bfc1389626c1085007c,PodSandboxId:785707d29ed30f5ebc431bbead40e715028f4565be2755b241b69ca282345eb0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728305737825621975,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-474339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 993865acd3dbe132da25c6c1d4813a7a,},Annotations
:map[string]string{io.kubernetes.container.hash: a5abea8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10217bc7-bcfe-461b-b4f5-255ac2b08d27 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f95a55327aa67       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   4 seconds ago       Running             coredns                   1                   e159e59082380       coredns-6d4b75cb6d-rdnrz
	d60731ccbbd1f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Running             storage-provisioner       1                   bf19d700d8fec       storage-provisioner
	c69e2ed988f79       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   a23d0edd5f652       kube-proxy-777v2
	1f68d30e5da21       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   18 seconds ago      Running             kube-scheduler            1                   49c8220ff5511       kube-scheduler-test-preload-474339
	d10c7c4db9cf7       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   18 seconds ago      Running             etcd                      1                   c2cc445926703       etcd-test-preload-474339
	18a8a22b856db       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   18 seconds ago      Running             kube-controller-manager   1                   6646cca0fa084       kube-controller-manager-test-preload-474339
	ebeca0f2c9304       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            1                   785707d29ed30       kube-apiserver-test-preload-474339
	
	
	==> coredns [f95a55327aa67d32f2c3f5f315452ae93bc6f6ee2bf43bba2bdb5ff5cf576243] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:38445 - 2087 "HINFO IN 8627282939292849938.1406128548808257343. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056887331s
	
	
	==> describe nodes <==
	Name:               test-preload-474339
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-474339
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b
	                    minikube.k8s.io/name=test-preload-474339
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_54_30_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:54:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-474339
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:55:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:55:52 +0000   Mon, 07 Oct 2024 12:54:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:55:52 +0000   Mon, 07 Oct 2024 12:54:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:55:52 +0000   Mon, 07 Oct 2024 12:54:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:55:52 +0000   Mon, 07 Oct 2024 12:55:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.79
	  Hostname:    test-preload-474339
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a61fc9a96e1243d8925369276e99a6d1
	  System UUID:                a61fc9a9-6e12-43d8-9253-69276e99a6d1
	  Boot ID:                    c108d9c9-a95e-4727-ad6b-1b9cd7ebd164
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-rdnrz                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     73s
	  kube-system                 etcd-test-preload-474339                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         88s
	  kube-system                 kube-apiserver-test-preload-474339             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-test-preload-474339    200m (10%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-proxy-777v2                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-test-preload-474339             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  Starting                 71s                kube-proxy       
	  Normal  Starting                 86s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  86s                kubelet          Node test-preload-474339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s                kubelet          Node test-preload-474339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s                kubelet          Node test-preload-474339 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                76s                kubelet          Node test-preload-474339 status is now: NodeReady
	  Normal  RegisteredNode           74s                node-controller  Node test-preload-474339 event: Registered Node test-preload-474339 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-474339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-474339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-474339 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node test-preload-474339 event: Registered Node test-preload-474339 in Controller
	
	
	==> dmesg <==
	[Oct 7 12:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051406] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040556] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.876993] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.849099] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.607342] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.992136] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.059755] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062440] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.166724] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.157893] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.306489] systemd-fstab-generator[682]: Ignoring "noauto" option for root device
	[ +13.149587] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.064926] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.651095] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +6.719765] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.878592] systemd-fstab-generator[1769]: Ignoring "noauto" option for root device
	[  +5.974420] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [d10c7c4db9cf76354e61d113f745594dd3817341b23444150b3b42e9f310fc7b] <==
	{"level":"info","ts":"2024-10-07T12:55:38.392Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"a91a1bbc2c758cdc","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-07T12:55:38.394Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-07T12:55:38.399Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-07T12:55:38.400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a91a1bbc2c758cdc switched to configuration voters=(12185082236818001116)"}
	{"level":"info","ts":"2024-10-07T12:55:38.401Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1edb09d3fc38073e","local-member-id":"a91a1bbc2c758cdc","added-peer-id":"a91a1bbc2c758cdc","added-peer-peer-urls":["https://192.168.39.79:2380"]}
	{"level":"info","ts":"2024-10-07T12:55:38.401Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1edb09d3fc38073e","local-member-id":"a91a1bbc2c758cdc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:55:38.405Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:55:38.400Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.79:2380"}
	{"level":"info","ts":"2024-10-07T12:55:38.411Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.79:2380"}
	{"level":"info","ts":"2024-10-07T12:55:38.411Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"a91a1bbc2c758cdc","initial-advertise-peer-urls":["https://192.168.39.79:2380"],"listen-peer-urls":["https://192.168.39.79:2380"],"advertise-client-urls":["https://192.168.39.79:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.79:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T12:55:38.413Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T12:55:39.941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a91a1bbc2c758cdc is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-07T12:55:39.941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a91a1bbc2c758cdc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-07T12:55:39.941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a91a1bbc2c758cdc received MsgPreVoteResp from a91a1bbc2c758cdc at term 2"}
	{"level":"info","ts":"2024-10-07T12:55:39.941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a91a1bbc2c758cdc became candidate at term 3"}
	{"level":"info","ts":"2024-10-07T12:55:39.941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a91a1bbc2c758cdc received MsgVoteResp from a91a1bbc2c758cdc at term 3"}
	{"level":"info","ts":"2024-10-07T12:55:39.941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a91a1bbc2c758cdc became leader at term 3"}
	{"level":"info","ts":"2024-10-07T12:55:39.941Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a91a1bbc2c758cdc elected leader a91a1bbc2c758cdc at term 3"}
	{"level":"info","ts":"2024-10-07T12:55:39.946Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"a91a1bbc2c758cdc","local-member-attributes":"{Name:test-preload-474339 ClientURLs:[https://192.168.39.79:2379]}","request-path":"/0/members/a91a1bbc2c758cdc/attributes","cluster-id":"1edb09d3fc38073e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T12:55:39.946Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T12:55:39.947Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T12:55:39.947Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T12:55:39.947Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T12:55:39.948Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T12:55:39.948Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.79:2379"}
	
	
	==> kernel <==
	 12:55:56 up 0 min,  0 users,  load average: 0.37, 0.13, 0.05
	Linux test-preload-474339 5.10.207 #1 SMP Mon Sep 23 21:01:39 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ebeca0f2c93047b31834021510886a09917e24cc562a6bfc1389626c1085007c] <==
	I1007 12:55:42.469525       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1007 12:55:42.469541       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1007 12:55:42.487064       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1007 12:55:42.521934       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1007 12:55:42.488278       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I1007 12:55:42.521949       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	E1007 12:55:42.616259       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1007 12:55:42.620867       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1007 12:55:42.621495       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 12:55:42.623525       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1007 12:55:42.627545       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1007 12:55:42.627746       1 cache.go:39] Caches are synced for autoregister controller
	I1007 12:55:42.635615       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1007 12:55:42.661979       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1007 12:55:42.662927       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 12:55:43.128312       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1007 12:55:43.461398       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1007 12:55:43.882687       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1007 12:55:43.897543       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1007 12:55:43.939543       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1007 12:55:43.969049       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1007 12:55:43.977382       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1007 12:55:44.484981       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1007 12:55:55.535837       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 12:55:55.547736       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [18a8a22b856db54c2bff0167ee523e16e5ad6bc47a8e2015dc8500f6a3ce9d5b] <==
	I1007 12:55:55.486781       1 disruption.go:371] Sending events to api server.
	I1007 12:55:55.486894       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1007 12:55:55.519333       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1007 12:55:55.519416       1 shared_informer.go:262] Caches are synced for stateful set
	I1007 12:55:55.520790       1 shared_informer.go:262] Caches are synced for ephemeral
	I1007 12:55:55.523455       1 shared_informer.go:262] Caches are synced for GC
	I1007 12:55:55.526823       1 shared_informer.go:262] Caches are synced for deployment
	I1007 12:55:55.529650       1 shared_informer.go:262] Caches are synced for endpoint
	I1007 12:55:55.557052       1 shared_informer.go:262] Caches are synced for daemon sets
	I1007 12:55:55.557514       1 shared_informer.go:262] Caches are synced for taint
	I1007 12:55:55.557901       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1007 12:55:55.558481       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-474339. Assuming now as a timestamp.
	I1007 12:55:55.558654       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1007 12:55:55.559062       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1007 12:55:55.560629       1 event.go:294] "Event occurred" object="test-preload-474339" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-474339 event: Registered Node test-preload-474339 in Controller"
	I1007 12:55:55.600357       1 shared_informer.go:262] Caches are synced for resource quota
	I1007 12:55:55.641082       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1007 12:55:55.679098       1 shared_informer.go:262] Caches are synced for resource quota
	I1007 12:55:55.719997       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1007 12:55:55.720120       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1007 12:55:55.720152       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1007 12:55:55.720172       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1007 12:55:56.118614       1 shared_informer.go:262] Caches are synced for garbage collector
	I1007 12:55:56.118661       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1007 12:55:56.150097       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [c69e2ed988f794fd74dd57cfb05118dbbc676e88791f85e6b90cbb184b685ff2] <==
	I1007 12:55:44.409391       1 node.go:163] Successfully retrieved node IP: 192.168.39.79
	I1007 12:55:44.409595       1 server_others.go:138] "Detected node IP" address="192.168.39.79"
	I1007 12:55:44.409709       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1007 12:55:44.472493       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1007 12:55:44.472582       1 server_others.go:206] "Using iptables Proxier"
	I1007 12:55:44.473018       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1007 12:55:44.474398       1 server.go:661] "Version info" version="v1.24.4"
	I1007 12:55:44.474485       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:55:44.476553       1 config.go:317] "Starting service config controller"
	I1007 12:55:44.476671       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1007 12:55:44.476792       1 config.go:226] "Starting endpoint slice config controller"
	I1007 12:55:44.476889       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1007 12:55:44.478325       1 config.go:444] "Starting node config controller"
	I1007 12:55:44.478477       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1007 12:55:44.577181       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1007 12:55:44.577299       1 shared_informer.go:262] Caches are synced for service config
	I1007 12:55:44.579320       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [1f68d30e5da2136de98885e2b659838e6ef911b7d9a07cb8244eea87c6549436] <==
	I1007 12:55:39.230944       1 serving.go:348] Generated self-signed cert in-memory
	W1007 12:55:42.506872       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1007 12:55:42.506945       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 12:55:42.506962       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 12:55:42.506971       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 12:55:42.624162       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1007 12:55:42.624250       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:55:42.636748       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1007 12:55:42.640259       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1007 12:55:42.640932       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 12:55:42.641082       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1007 12:55:42.742371       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.152417    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fbd1e15a-ac71-46da-ba4a-7fc894cb87c2-kube-proxy\") pod \"kube-proxy-777v2\" (UID: \"fbd1e15a-ac71-46da-ba4a-7fc894cb87c2\") " pod="kube-system/kube-proxy-777v2"
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.152438    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmnwf\" (UniqueName: \"kubernetes.io/projected/3faeec35-44dd-4911-8d43-f94ba92ecedf-kube-api-access-rmnwf\") pod \"coredns-6d4b75cb6d-rdnrz\" (UID: \"3faeec35-44dd-4911-8d43-f94ba92ecedf\") " pod="kube-system/coredns-6d4b75cb6d-rdnrz"
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.152462    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbd1e15a-ac71-46da-ba4a-7fc894cb87c2-xtables-lock\") pod \"kube-proxy-777v2\" (UID: \"fbd1e15a-ac71-46da-ba4a-7fc894cb87c2\") " pod="kube-system/kube-proxy-777v2"
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.152479    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbd1e15a-ac71-46da-ba4a-7fc894cb87c2-lib-modules\") pod \"kube-proxy-777v2\" (UID: \"fbd1e15a-ac71-46da-ba4a-7fc894cb87c2\") " pod="kube-system/kube-proxy-777v2"
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.152512    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsj6x\" (UniqueName: \"kubernetes.io/projected/fbd1e15a-ac71-46da-ba4a-7fc894cb87c2-kube-api-access-nsj6x\") pod \"kube-proxy-777v2\" (UID: \"fbd1e15a-ac71-46da-ba4a-7fc894cb87c2\") " pod="kube-system/kube-proxy-777v2"
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.152540    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3faeec35-44dd-4911-8d43-f94ba92ecedf-config-volume\") pod \"coredns-6d4b75cb6d-rdnrz\" (UID: \"3faeec35-44dd-4911-8d43-f94ba92ecedf\") " pod="kube-system/coredns-6d4b75cb6d-rdnrz"
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.152557    1145 reconciler.go:159] "Reconciler: start to sync state"
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.639047    1145 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bmvf\" (UniqueName: \"kubernetes.io/projected/426a22eb-70a1-417f-972d-33fdf72fac11-kube-api-access-9bmvf\") pod \"426a22eb-70a1-417f-972d-33fdf72fac11\" (UID: \"426a22eb-70a1-417f-972d-33fdf72fac11\") "
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.639105    1145 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/426a22eb-70a1-417f-972d-33fdf72fac11-config-volume\") pod \"426a22eb-70a1-417f-972d-33fdf72fac11\" (UID: \"426a22eb-70a1-417f-972d-33fdf72fac11\") "
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: E1007 12:55:43.640316    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: E1007 12:55:43.640560    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3faeec35-44dd-4911-8d43-f94ba92ecedf-config-volume podName:3faeec35-44dd-4911-8d43-f94ba92ecedf nodeName:}" failed. No retries permitted until 2024-10-07 12:55:44.140451869 +0000 UTC m=+7.182646624 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3faeec35-44dd-4911-8d43-f94ba92ecedf-config-volume") pod "coredns-6d4b75cb6d-rdnrz" (UID: "3faeec35-44dd-4911-8d43-f94ba92ecedf") : object "kube-system"/"coredns" not registered
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: W1007 12:55:43.641504    1145 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/426a22eb-70a1-417f-972d-33fdf72fac11/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.641989    1145 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/426a22eb-70a1-417f-972d-33fdf72fac11-config-volume" (OuterVolumeSpecName: "config-volume") pod "426a22eb-70a1-417f-972d-33fdf72fac11" (UID: "426a22eb-70a1-417f-972d-33fdf72fac11"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: W1007 12:55:43.642276    1145 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/426a22eb-70a1-417f-972d-33fdf72fac11/volumes/kubernetes.io~projected/kube-api-access-9bmvf: clearQuota called, but quotas disabled
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.642699    1145 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/426a22eb-70a1-417f-972d-33fdf72fac11-kube-api-access-9bmvf" (OuterVolumeSpecName: "kube-api-access-9bmvf") pod "426a22eb-70a1-417f-972d-33fdf72fac11" (UID: "426a22eb-70a1-417f-972d-33fdf72fac11"). InnerVolumeSpecName "kube-api-access-9bmvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.739471    1145 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/426a22eb-70a1-417f-972d-33fdf72fac11-config-volume\") on node \"test-preload-474339\" DevicePath \"\""
	Oct 07 12:55:43 test-preload-474339 kubelet[1145]: I1007 12:55:43.739517    1145 reconciler.go:384] "Volume detached for volume \"kube-api-access-9bmvf\" (UniqueName: \"kubernetes.io/projected/426a22eb-70a1-417f-972d-33fdf72fac11-kube-api-access-9bmvf\") on node \"test-preload-474339\" DevicePath \"\""
	Oct 07 12:55:44 test-preload-474339 kubelet[1145]: E1007 12:55:44.142670    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 07 12:55:44 test-preload-474339 kubelet[1145]: E1007 12:55:44.142732    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3faeec35-44dd-4911-8d43-f94ba92ecedf-config-volume podName:3faeec35-44dd-4911-8d43-f94ba92ecedf nodeName:}" failed. No retries permitted until 2024-10-07 12:55:45.142718246 +0000 UTC m=+8.184912986 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3faeec35-44dd-4911-8d43-f94ba92ecedf-config-volume") pod "coredns-6d4b75cb6d-rdnrz" (UID: "3faeec35-44dd-4911-8d43-f94ba92ecedf") : object "kube-system"/"coredns" not registered
	Oct 07 12:55:45 test-preload-474339 kubelet[1145]: E1007 12:55:45.151477    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 07 12:55:45 test-preload-474339 kubelet[1145]: E1007 12:55:45.152079    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3faeec35-44dd-4911-8d43-f94ba92ecedf-config-volume podName:3faeec35-44dd-4911-8d43-f94ba92ecedf nodeName:}" failed. No retries permitted until 2024-10-07 12:55:47.15200436 +0000 UTC m=+10.194199115 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3faeec35-44dd-4911-8d43-f94ba92ecedf-config-volume") pod "coredns-6d4b75cb6d-rdnrz" (UID: "3faeec35-44dd-4911-8d43-f94ba92ecedf") : object "kube-system"/"coredns" not registered
	Oct 07 12:55:45 test-preload-474339 kubelet[1145]: E1007 12:55:45.222373    1145 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-rdnrz" podUID=3faeec35-44dd-4911-8d43-f94ba92ecedf
	Oct 07 12:55:45 test-preload-474339 kubelet[1145]: I1007 12:55:45.227675    1145 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=426a22eb-70a1-417f-972d-33fdf72fac11 path="/var/lib/kubelet/pods/426a22eb-70a1-417f-972d-33fdf72fac11/volumes"
	Oct 07 12:55:47 test-preload-474339 kubelet[1145]: E1007 12:55:47.170327    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 07 12:55:47 test-preload-474339 kubelet[1145]: E1007 12:55:47.170413    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3faeec35-44dd-4911-8d43-f94ba92ecedf-config-volume podName:3faeec35-44dd-4911-8d43-f94ba92ecedf nodeName:}" failed. No retries permitted until 2024-10-07 12:55:51.170396559 +0000 UTC m=+14.212591314 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3faeec35-44dd-4911-8d43-f94ba92ecedf-config-volume") pod "coredns-6d4b75cb6d-rdnrz" (UID: "3faeec35-44dd-4911-8d43-f94ba92ecedf") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [d60731ccbbd1f45970a5f86ee65407c1a173b9375b34a4815782b4aea6978f85] <==
	I1007 12:55:44.548140       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-474339 -n test-preload-474339
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-474339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-474339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-474339
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-474339: (1.203091331s)
--- FAIL: TestPreload (157.81s)

                                                
                                    
x
+
TestKubernetesUpgrade (384.54s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-415734 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-415734 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m39.190775395s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-415734] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-415734" primary control-plane node in "kubernetes-upgrade-415734" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:00:52.401695  430919 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:00:52.401829  430919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:00:52.401838  430919 out.go:358] Setting ErrFile to fd 2...
	I1007 13:00:52.401842  430919 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:00:52.402018  430919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 13:00:52.402649  430919 out.go:352] Setting JSON to false
	I1007 13:00:52.403727  430919 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9798,"bootTime":1728296254,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:00:52.403836  430919 start.go:139] virtualization: kvm guest
	I1007 13:00:52.406472  430919 out.go:177] * [kubernetes-upgrade-415734] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:00:52.408062  430919 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 13:00:52.408111  430919 notify.go:220] Checking for updates...
	I1007 13:00:52.410718  430919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:00:52.412084  430919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 13:00:52.413532  430919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 13:00:52.415171  430919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:00:52.416628  430919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:00:52.418734  430919 config.go:182] Loaded profile config "NoKubernetes-226737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:00:52.418911  430919 config.go:182] Loaded profile config "running-upgrade-872700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1007 13:00:52.419055  430919 config.go:182] Loaded profile config "stopped-upgrade-753355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1007 13:00:52.419181  430919 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:00:52.457772  430919 out.go:177] * Using the kvm2 driver based on user configuration
	I1007 13:00:52.459240  430919 start.go:297] selected driver: kvm2
	I1007 13:00:52.459263  430919 start.go:901] validating driver "kvm2" against <nil>
	I1007 13:00:52.459281  430919 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:00:52.460238  430919 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:00:52.460337  430919 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:00:52.476921  430919 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:00:52.476978  430919 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 13:00:52.477269  430919 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 13:00:52.477301  430919 cni.go:84] Creating CNI manager for ""
	I1007 13:00:52.477350  430919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:00:52.477362  430919 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 13:00:52.477412  430919 start.go:340] cluster config:
	{Name:kubernetes-upgrade-415734 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-415734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:00:52.477527  430919 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:00:52.479652  430919 out.go:177] * Starting "kubernetes-upgrade-415734" primary control-plane node in "kubernetes-upgrade-415734" cluster
	I1007 13:00:52.481047  430919 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 13:00:52.481114  430919 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1007 13:00:52.481125  430919 cache.go:56] Caching tarball of preloaded images
	I1007 13:00:52.481227  430919 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:00:52.481241  430919 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1007 13:00:52.481380  430919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/config.json ...
	I1007 13:00:52.481428  430919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/config.json: {Name:mka84f7a09efdd453932481d4001e3ec97533b9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:00:52.481601  430919 start.go:360] acquireMachinesLock for kubernetes-upgrade-415734: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:00:58.081224  430919 start.go:364] duration metric: took 5.599593491s to acquireMachinesLock for "kubernetes-upgrade-415734"
	I1007 13:00:58.081306  430919 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-415734 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-415734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:00:58.081442  430919 start.go:125] createHost starting for "" (driver="kvm2")
	I1007 13:00:58.083462  430919 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 13:00:58.083675  430919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:00:58.083739  430919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:00:58.101683  430919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33933
	I1007 13:00:58.102236  430919 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:00:58.102830  430919 main.go:141] libmachine: Using API Version  1
	I1007 13:00:58.102860  430919 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:00:58.103284  430919 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:00:58.103516  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetMachineName
	I1007 13:00:58.103690  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:00:58.103876  430919 start.go:159] libmachine.API.Create for "kubernetes-upgrade-415734" (driver="kvm2")
	I1007 13:00:58.103920  430919 client.go:168] LocalClient.Create starting
	I1007 13:00:58.103973  430919 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem
	I1007 13:00:58.104017  430919 main.go:141] libmachine: Decoding PEM data...
	I1007 13:00:58.104040  430919 main.go:141] libmachine: Parsing certificate...
	I1007 13:00:58.104109  430919 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem
	I1007 13:00:58.104136  430919 main.go:141] libmachine: Decoding PEM data...
	I1007 13:00:58.104190  430919 main.go:141] libmachine: Parsing certificate...
	I1007 13:00:58.104213  430919 main.go:141] libmachine: Running pre-create checks...
	I1007 13:00:58.104227  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .PreCreateCheck
	I1007 13:00:58.104654  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetConfigRaw
	I1007 13:00:58.105051  430919 main.go:141] libmachine: Creating machine...
	I1007 13:00:58.105066  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .Create
	I1007 13:00:58.105173  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Creating KVM machine...
	I1007 13:00:58.106520  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found existing default KVM network
	I1007 13:00:58.108886  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:00:58.108676  431055 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:51:97:77} reservation:<nil>}
	I1007 13:00:58.110244  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:00:58.110134  431055 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a4a50}
	I1007 13:00:58.110336  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | created network xml: 
	I1007 13:00:58.110363  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | <network>
	I1007 13:00:58.110379  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG |   <name>mk-kubernetes-upgrade-415734</name>
	I1007 13:00:58.110399  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG |   <dns enable='no'/>
	I1007 13:00:58.110409  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG |   
	I1007 13:00:58.110439  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1007 13:00:58.110459  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG |     <dhcp>
	I1007 13:00:58.110470  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1007 13:00:58.110479  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG |     </dhcp>
	I1007 13:00:58.110496  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG |   </ip>
	I1007 13:00:58.110524  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG |   
	I1007 13:00:58.110564  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | </network>
	I1007 13:00:58.110580  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | 
	I1007 13:00:58.116381  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | trying to create private KVM network mk-kubernetes-upgrade-415734 192.168.50.0/24...
	I1007 13:00:58.209532  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | private KVM network mk-kubernetes-upgrade-415734 192.168.50.0/24 created
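	Pieced together from the "created network xml" DBG lines above, the private libvirt network definition used for this machine is (values exactly as logged):
	    <network>
	      <name>mk-kubernetes-upgrade-415734</name>
	      <dns enable='no'/>
	      <ip address='192.168.50.1' netmask='255.255.255.0'>
	        <dhcp>
	          <range start='192.168.50.2' end='192.168.50.253'/>
	        </dhcp>
	      </ip>
	    </network>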
	I1007 13:00:58.209571  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Setting up store path in /home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734 ...
	I1007 13:00:58.209587  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:00:58.209338  431055 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 13:00:58.209600  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Building disk image from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 13:00:58.209622  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Downloading /home/jenkins/minikube-integration/19763-377026/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso...
	I1007 13:00:58.516289  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:00:58.516101  431055 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa...
	I1007 13:00:58.640496  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:00:58.640340  431055 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/kubernetes-upgrade-415734.rawdisk...
	I1007 13:00:58.640536  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Writing magic tar header
	I1007 13:00:58.640559  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Writing SSH key tar header
	I1007 13:00:58.640567  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:00:58.640460  431055 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734 ...
	I1007 13:00:58.640583  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734
	I1007 13:00:58.640597  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734 (perms=drwx------)
	I1007 13:00:58.640617  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube/machines
	I1007 13:00:58.640629  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 13:00:58.640644  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube/machines (perms=drwxr-xr-x)
	I1007 13:00:58.640659  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026/.minikube (perms=drwxr-xr-x)
	I1007 13:00:58.640672  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Setting executable bit set on /home/jenkins/minikube-integration/19763-377026 (perms=drwxrwxr-x)
	I1007 13:00:58.640685  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19763-377026
	I1007 13:00:58.640702  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1007 13:00:58.640714  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Checking permissions on dir: /home/jenkins
	I1007 13:00:58.640726  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1007 13:00:58.640739  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1007 13:00:58.640749  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Creating domain...
	I1007 13:00:58.640758  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Checking permissions on dir: /home
	I1007 13:00:58.640792  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Skipping /home - not owner
	I1007 13:00:58.641842  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) define libvirt domain using xml: 
	I1007 13:00:58.641868  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) <domain type='kvm'>
	I1007 13:00:58.641879  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   <name>kubernetes-upgrade-415734</name>
	I1007 13:00:58.641893  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   <memory unit='MiB'>2200</memory>
	I1007 13:00:58.641906  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   <vcpu>2</vcpu>
	I1007 13:00:58.641914  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   <features>
	I1007 13:00:58.641922  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <acpi/>
	I1007 13:00:58.641929  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <apic/>
	I1007 13:00:58.641943  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <pae/>
	I1007 13:00:58.641950  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     
	I1007 13:00:58.641956  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   </features>
	I1007 13:00:58.641965  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   <cpu mode='host-passthrough'>
	I1007 13:00:58.641973  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   
	I1007 13:00:58.641981  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   </cpu>
	I1007 13:00:58.641989  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   <os>
	I1007 13:00:58.642000  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <type>hvm</type>
	I1007 13:00:58.642007  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <boot dev='cdrom'/>
	I1007 13:00:58.642013  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <boot dev='hd'/>
	I1007 13:00:58.642019  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <bootmenu enable='no'/>
	I1007 13:00:58.642024  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   </os>
	I1007 13:00:58.642030  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   <devices>
	I1007 13:00:58.642040  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <disk type='file' device='cdrom'>
	I1007 13:00:58.642056  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/boot2docker.iso'/>
	I1007 13:00:58.642067  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <target dev='hdc' bus='scsi'/>
	I1007 13:00:58.642075  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <readonly/>
	I1007 13:00:58.642097  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     </disk>
	I1007 13:00:58.642132  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <disk type='file' device='disk'>
	I1007 13:00:58.642155  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1007 13:00:58.642192  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/kubernetes-upgrade-415734.rawdisk'/>
	I1007 13:00:58.642201  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <target dev='hda' bus='virtio'/>
	I1007 13:00:58.642211  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     </disk>
	I1007 13:00:58.642218  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <interface type='network'>
	I1007 13:00:58.642229  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <source network='mk-kubernetes-upgrade-415734'/>
	I1007 13:00:58.642236  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <model type='virtio'/>
	I1007 13:00:58.642245  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     </interface>
	I1007 13:00:58.642251  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <interface type='network'>
	I1007 13:00:58.642260  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <source network='default'/>
	I1007 13:00:58.642267  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <model type='virtio'/>
	I1007 13:00:58.642281  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     </interface>
	I1007 13:00:58.642288  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <serial type='pty'>
	I1007 13:00:58.642295  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <target port='0'/>
	I1007 13:00:58.642302  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     </serial>
	I1007 13:00:58.642311  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <console type='pty'>
	I1007 13:00:58.642318  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <target type='serial' port='0'/>
	I1007 13:00:58.642327  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     </console>
	I1007 13:00:58.642334  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     <rng model='virtio'>
	I1007 13:00:58.642343  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)       <backend model='random'>/dev/random</backend>
	I1007 13:00:58.642381  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     </rng>
	I1007 13:00:58.642393  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     
	I1007 13:00:58.642399  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)     
	I1007 13:00:58.642407  430919 main.go:141] libmachine: (kubernetes-upgrade-415734)   </devices>
	I1007 13:00:58.642413  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) </domain>
	I1007 13:00:58.642423  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) 
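	Pieced together from the "define libvirt domain using xml" lines above, the domain definition for this VM is (reconstructed for readability; element values exactly as logged):
	    <domain type='kvm'>
	      <name>kubernetes-upgrade-415734</name>
	      <memory unit='MiB'>2200</memory>
	      <vcpu>2</vcpu>
	      <features>
	        <acpi/>
	        <apic/>
	        <pae/>
	      </features>
	      <cpu mode='host-passthrough'>
	      </cpu>
	      <os>
	        <type>hvm</type>
	        <boot dev='cdrom'/>
	        <boot dev='hd'/>
	        <bootmenu enable='no'/>
	      </os>
	      <devices>
	        <disk type='file' device='cdrom'>
	          <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/boot2docker.iso'/>
	          <target dev='hdc' bus='scsi'/>
	          <readonly/>
	        </disk>
	        <disk type='file' device='disk'>
	          <driver name='qemu' type='raw' cache='default' io='threads' />
	          <source file='/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/kubernetes-upgrade-415734.rawdisk'/>
	          <target dev='hda' bus='virtio'/>
	        </disk>
	        <interface type='network'>
	          <source network='mk-kubernetes-upgrade-415734'/>
	          <model type='virtio'/>
	        </interface>
	        <interface type='network'>
	          <source network='default'/>
	          <model type='virtio'/>
	        </interface>
	        <serial type='pty'>
	          <target port='0'/>
	        </serial>
	        <console type='pty'>
	          <target type='serial' port='0'/>
	        </console>
	        <rng model='virtio'>
	          <backend model='random'>/dev/random</backend>
	        </rng>
	      </devices>
	    </domain>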
	I1007 13:00:58.646903  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:e0:28:44 in network default
	I1007 13:00:58.647793  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Ensuring networks are active...
	I1007 13:00:58.647823  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:00:58.648789  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Ensuring network default is active
	I1007 13:00:58.649130  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Ensuring network mk-kubernetes-upgrade-415734 is active
	I1007 13:00:58.649855  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Getting domain xml...
	I1007 13:00:58.650677  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Creating domain...
	I1007 13:01:00.776578  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Waiting to get IP...
	I1007 13:01:00.777660  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:00.778165  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:00.778201  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:00.778115  431055 retry.go:31] will retry after 259.952409ms: waiting for machine to come up
	I1007 13:01:01.039733  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:01.040296  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:01.040324  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:01.040251  431055 retry.go:31] will retry after 369.718067ms: waiting for machine to come up
	I1007 13:01:01.412148  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:01.412675  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:01.412710  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:01.412643  431055 retry.go:31] will retry after 385.224105ms: waiting for machine to come up
	I1007 13:01:01.799201  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:01.799831  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:01.799858  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:01.799791  431055 retry.go:31] will retry after 381.96473ms: waiting for machine to come up
	I1007 13:01:02.183664  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:02.184119  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:02.184181  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:02.184063  431055 retry.go:31] will retry after 660.966808ms: waiting for machine to come up
	I1007 13:01:02.847188  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:02.847622  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:02.847652  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:02.847594  431055 retry.go:31] will retry after 749.050417ms: waiting for machine to come up
	I1007 13:01:03.598330  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:03.598997  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:03.599035  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:03.598938  431055 retry.go:31] will retry after 1.030885071s: waiting for machine to come up
	I1007 13:01:04.631309  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:04.631910  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:04.631939  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:04.631837  431055 retry.go:31] will retry after 939.841903ms: waiting for machine to come up
	I1007 13:01:05.573475  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:05.573993  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:05.574030  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:05.573932  431055 retry.go:31] will retry after 1.389532287s: waiting for machine to come up
	I1007 13:01:06.965531  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:06.965976  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:06.966000  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:06.965919  431055 retry.go:31] will retry after 1.888051158s: waiting for machine to come up
	I1007 13:01:08.855260  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:08.855817  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:08.855847  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:08.855761  431055 retry.go:31] will retry after 2.044600017s: waiting for machine to come up
	I1007 13:01:10.902928  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:10.903498  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:10.903527  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:10.903452  431055 retry.go:31] will retry after 3.39171324s: waiting for machine to come up
	I1007 13:01:14.296679  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:14.297296  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:14.297325  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:14.297228  431055 retry.go:31] will retry after 3.153078852s: waiting for machine to come up
	I1007 13:01:17.452029  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:17.452475  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find current IP address of domain kubernetes-upgrade-415734 in network mk-kubernetes-upgrade-415734
	I1007 13:01:17.452508  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | I1007 13:01:17.452434  431055 retry.go:31] will retry after 5.593650618s: waiting for machine to come up
	I1007 13:01:23.048430  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.049098  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Found IP for machine: 192.168.50.141
	I1007 13:01:23.049142  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has current primary IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.049151  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Reserving static IP address...
	I1007 13:01:23.049648  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-415734", mac: "52:54:00:52:b1:08", ip: "192.168.50.141"} in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.135241  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Reserved static IP address: 192.168.50.141
	I1007 13:01:23.135275  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Waiting for SSH to be available...
	I1007 13:01:23.135330  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Getting to WaitForSSH function...
	I1007 13:01:23.138539  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.139034  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:23.139067  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.139208  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Using SSH client type: external
	I1007 13:01:23.139238  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa (-rw-------)
	I1007 13:01:23.139279  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:01:23.139304  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | About to run SSH command:
	I1007 13:01:23.139318  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | exit 0
	I1007 13:01:23.271329  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | SSH cmd err, output: <nil>: 
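	Assembled from the external SSH argument list logged above, the WaitForSSH probe is roughly equivalent to running the following command against the new VM (paths and address exactly as logged; the probed command is the "exit 0" shown above):
	    /usr/bin/ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
	      -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.141 \
	      -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa \
	      -p 22 "exit 0"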
	I1007 13:01:23.271618  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) KVM machine creation complete!
	I1007 13:01:23.271946  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetConfigRaw
	I1007 13:01:23.272729  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:01:23.272941  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:01:23.273114  430919 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 13:01:23.273131  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetState
	I1007 13:01:23.274723  430919 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 13:01:23.274756  430919 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 13:01:23.274765  430919 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 13:01:23.274774  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:23.277186  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.277532  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:23.277559  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.277694  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:01:23.277880  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:23.278061  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:23.278239  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:01:23.278435  430919 main.go:141] libmachine: Using SSH client type: native
	I1007 13:01:23.278644  430919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:01:23.278655  430919 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 13:01:23.398682  430919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:01:23.398712  430919 main.go:141] libmachine: Detecting the provisioner...
	I1007 13:01:23.398723  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:23.402092  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.402495  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:23.402526  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.402743  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:01:23.402980  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:23.403163  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:23.403336  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:01:23.403518  430919 main.go:141] libmachine: Using SSH client type: native
	I1007 13:01:23.403763  430919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:01:23.403780  430919 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 13:01:23.520043  430919 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 13:01:23.520134  430919 main.go:141] libmachine: found compatible host: buildroot
	I1007 13:01:23.520148  430919 main.go:141] libmachine: Provisioning with buildroot...
	I1007 13:01:23.520163  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetMachineName
	I1007 13:01:23.520448  430919 buildroot.go:166] provisioning hostname "kubernetes-upgrade-415734"
	I1007 13:01:23.520480  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetMachineName
	I1007 13:01:23.520684  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:23.523409  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.523789  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:23.523833  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.523929  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:01:23.524115  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:23.524288  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:23.524424  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:01:23.524579  430919 main.go:141] libmachine: Using SSH client type: native
	I1007 13:01:23.524840  430919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:01:23.524854  430919 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-415734 && echo "kubernetes-upgrade-415734" | sudo tee /etc/hostname
	I1007 13:01:23.655654  430919 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-415734
	
	I1007 13:01:23.655690  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:23.658447  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.658867  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:23.658898  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.659104  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:01:23.659292  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:23.659443  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:23.659554  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:01:23.659718  430919 main.go:141] libmachine: Using SSH client type: native
	I1007 13:01:23.659892  430919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:01:23.659908  430919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-415734' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-415734/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-415734' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:01:23.788715  430919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:01:23.788750  430919 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 13:01:23.788789  430919 buildroot.go:174] setting up certificates
	I1007 13:01:23.788802  430919 provision.go:84] configureAuth start
	I1007 13:01:23.788812  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetMachineName
	I1007 13:01:23.789146  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetIP
	I1007 13:01:23.791925  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.792365  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:23.792406  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.792576  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:23.794990  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.795395  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:23.795424  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:23.795639  430919 provision.go:143] copyHostCerts
	I1007 13:01:23.795696  430919 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 13:01:23.795720  430919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 13:01:23.795786  430919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 13:01:23.795919  430919 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 13:01:23.795932  430919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 13:01:23.795963  430919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 13:01:23.796037  430919 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 13:01:23.796047  430919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 13:01:23.796075  430919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 13:01:23.796142  430919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-415734 san=[127.0.0.1 192.168.50.141 kubernetes-upgrade-415734 localhost minikube]
	I1007 13:01:24.107039  430919 provision.go:177] copyRemoteCerts
	I1007 13:01:24.107102  430919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:01:24.107131  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:24.109553  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.109826  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:24.109859  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.110011  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:01:24.110227  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:24.110398  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:01:24.110531  430919 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa Username:docker}
	I1007 13:01:24.199840  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1007 13:01:24.240773  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:01:24.267667  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:01:24.294415  430919 provision.go:87] duration metric: took 505.583039ms to configureAuth
	I1007 13:01:24.294450  430919 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:01:24.294617  430919 config.go:182] Loaded profile config "kubernetes-upgrade-415734": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1007 13:01:24.294705  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:24.297164  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.297458  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:24.297497  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.297679  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:01:24.297865  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:24.298031  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:24.298173  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:01:24.298346  430919 main.go:141] libmachine: Using SSH client type: native
	I1007 13:01:24.298546  430919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:01:24.298569  430919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:01:24.560720  430919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:01:24.560747  430919 main.go:141] libmachine: Checking connection to Docker...
	I1007 13:01:24.560756  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetURL
	I1007 13:01:24.562081  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | Using libvirt version 6000000
	I1007 13:01:24.564299  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.564617  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:24.564641  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.564811  430919 main.go:141] libmachine: Docker is up and running!
	I1007 13:01:24.564829  430919 main.go:141] libmachine: Reticulating splines...
	I1007 13:01:24.564837  430919 client.go:171] duration metric: took 26.460905496s to LocalClient.Create
	I1007 13:01:24.564860  430919 start.go:167] duration metric: took 26.46098714s to libmachine.API.Create "kubernetes-upgrade-415734"
	I1007 13:01:24.564870  430919 start.go:293] postStartSetup for "kubernetes-upgrade-415734" (driver="kvm2")
	I1007 13:01:24.564882  430919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:01:24.564904  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:01:24.565150  430919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:01:24.565179  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:24.567465  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.567821  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:24.567856  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.568029  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:01:24.568203  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:24.568353  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:01:24.568507  430919 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa Username:docker}
	I1007 13:01:24.657680  430919 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:01:24.662692  430919 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:01:24.662725  430919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 13:01:24.662796  430919 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 13:01:24.662922  430919 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 13:01:24.663094  430919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:01:24.674059  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 13:01:24.700854  430919 start.go:296] duration metric: took 135.963313ms for postStartSetup
	I1007 13:01:24.700934  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetConfigRaw
	I1007 13:01:24.701632  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetIP
	I1007 13:01:24.704372  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.704733  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:24.704775  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.704961  430919 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/config.json ...
	I1007 13:01:24.705179  430919 start.go:128] duration metric: took 26.62371857s to createHost
	I1007 13:01:24.705221  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:24.707810  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.708210  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:24.708235  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.708425  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:01:24.708650  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:24.708832  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:24.708999  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:01:24.709173  430919 main.go:141] libmachine: Using SSH client type: native
	I1007 13:01:24.709387  430919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:01:24.709400  430919 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:01:24.828387  430919 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728306084.803508623
	
	I1007 13:01:24.828420  430919 fix.go:216] guest clock: 1728306084.803508623
	I1007 13:01:24.828432  430919 fix.go:229] Guest: 2024-10-07 13:01:24.803508623 +0000 UTC Remote: 2024-10-07 13:01:24.705194098 +0000 UTC m=+32.346126118 (delta=98.314525ms)
	I1007 13:01:24.828498  430919 fix.go:200] guest clock delta is within tolerance: 98.314525ms
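The fix.go lines above read the guest clock with "date +%s.%N", compute the delta against the host clock, and only adjust the guest if the delta falls outside a tolerance. A small Go sketch of that comparison; the helper name clockDelta and the one-second tolerance are assumptions for illustration, and float parsing loses a little nanosecond precision, which is acceptable for a sketch:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns how far the
// guest clock is ahead of (positive) or behind (negative) the host clock.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log lines above.
	host := time.Date(2024, 10, 7, 13, 1, 24, 705194098, time.UTC)
	delta, err := clockDelta("1728306084.803508623", host)
	if err != nil {
		panic(err)
	}
	tolerance := time.Second // assumed threshold, for illustration only
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}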
	I1007 13:01:24.828511  430919 start.go:83] releasing machines lock for "kubernetes-upgrade-415734", held for 26.747242016s
	I1007 13:01:24.828549  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:01:24.828817  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetIP
	I1007 13:01:24.832039  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.832425  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:24.832461  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.832645  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:01:24.833213  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:01:24.833412  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:01:24.833524  430919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:01:24.833582  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:24.833634  430919 ssh_runner.go:195] Run: cat /version.json
	I1007 13:01:24.833660  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:01:24.836648  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.836942  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.837143  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:24.837175  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.837305  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:01:24.837395  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:24.837419  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:24.837534  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:24.837637  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:01:24.837731  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:01:24.837893  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:01:24.837886  430919 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa Username:docker}
	I1007 13:01:24.838059  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:01:24.838170  430919 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa Username:docker}
	I1007 13:01:24.924461  430919 ssh_runner.go:195] Run: systemctl --version
	I1007 13:01:24.946052  430919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:01:25.125205  430919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:01:25.132133  430919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:01:25.132252  430919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:01:25.152272  430919 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 13:01:25.152307  430919 start.go:495] detecting cgroup driver to use...
	I1007 13:01:25.152418  430919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:01:25.171685  430919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:01:25.189555  430919 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:01:25.189624  430919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:01:25.211474  430919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:01:25.233303  430919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:01:25.358554  430919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:01:25.504520  430919 docker.go:233] disabling docker service ...
	I1007 13:01:25.504596  430919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:01:25.520054  430919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:01:25.534034  430919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:01:25.676027  430919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:01:25.803698  430919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:01:25.818903  430919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:01:25.839077  430919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1007 13:01:25.839165  430919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:01:25.850944  430919 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:01:25.851032  430919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:01:25.862517  430919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:01:25.877139  430919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:01:25.889391  430919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
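The sequence above rewrites the pause image and the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, pins conmon to the pod cgroup, and clears any stale minikube CNI config. A Go sketch that produces the same shell commands (the helper name crioConfigCmds is assumed, not minikube's actual API):

package main

import "fmt"

// crioConfigCmds returns the in-place edits applied to the CRI-O drop-in
// config, matching the sed/rm commands in the log above.
func crioConfigCmds(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo rm -rf /etc/cni/net.mk",
	}
}

func main() {
	for _, cmd := range crioConfigCmds("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(cmd)
	}
}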
	I1007 13:01:25.905201  430919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:01:25.918439  430919 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:01:25.918536  430919 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:01:25.935197  430919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:01:25.945909  430919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:01:26.085773  430919 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:01:26.195001  430919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:01:26.195077  430919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:01:26.200573  430919 start.go:563] Will wait 60s for crictl version
	I1007 13:01:26.200646  430919 ssh_runner.go:195] Run: which crictl
	I1007 13:01:26.205050  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:01:26.255999  430919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:01:26.256097  430919 ssh_runner.go:195] Run: crio --version
	I1007 13:01:26.286300  430919 ssh_runner.go:195] Run: crio --version
	I1007 13:01:26.325026  430919 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1007 13:01:26.326711  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetIP
	I1007 13:01:26.329970  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:26.330442  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:01:14 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:01:26.330474  430919 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:01:26.330745  430919 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1007 13:01:26.335624  430919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
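The grep/cp pipeline above is an idempotent /etc/hosts update: it strips any existing entry for host.minikube.internal and appends the current gateway IP. A Go sketch of the same pattern (helper name hostsEntryCmd assumed; the real command embeds a literal tab between the IP and the hostname, shown here as \t for readability):

package main

import "fmt"

// hostsEntryCmd builds a command that removes any stale /etc/hosts entry for
// name and appends "ip<TAB>name", mirroring the pipeline in the log above.
func hostsEntryCmd(ip, name string) string {
	return fmt.Sprintf(
		`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s\t%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		name, ip, name)
}

func main() {
	fmt.Println(hostsEntryCmd("192.168.50.1", "host.minikube.internal"))
}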
	I1007 13:01:26.349737  430919 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-415734 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-415734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:01:26.349890  430919 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 13:01:26.349964  430919 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:01:26.383724  430919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 13:01:26.383817  430919 ssh_runner.go:195] Run: which lz4
	I1007 13:01:26.388538  430919 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:01:26.393213  430919 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:01:26.393255  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1007 13:01:28.302638  430919 crio.go:462] duration metric: took 1.914141754s to copy over tarball
	I1007 13:01:28.302744  430919 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:01:31.396291  430919 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.093510068s)
	I1007 13:01:31.396329  430919 crio.go:469] duration metric: took 3.093643773s to extract the tarball
	I1007 13:01:31.396345  430919 ssh_runner.go:146] rm: /preloaded.tar.lz4
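Since no preloaded images were found in the runtime, the tarball of cached images is copied to the guest and unpacked under /var. A Go sketch of the extraction and cleanup step (helper name preloadCmds assumed for illustration):

package main

import "fmt"

// preloadCmds returns the commands used to unpack the preloaded image tarball
// into /var (populating the CRI-O image store) and clean up afterwards.
// The tarball itself is copied to /preloaded.tar.lz4 beforehand via scp.
func preloadCmds(tarball string) []string {
	return []string{
		fmt.Sprintf("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf %s", tarball),
		fmt.Sprintf("sudo rm %s", tarball),
	}
}

func main() {
	for _, cmd := range preloadCmds("/preloaded.tar.lz4") {
		fmt.Println(cmd)
	}
}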
	I1007 13:01:31.444717  430919 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:01:31.501373  430919 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1007 13:01:31.501418  430919 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 13:01:31.501516  430919 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:01:31.501546  430919 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:01:31.501557  430919 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1007 13:01:31.501570  430919 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:01:31.501600  430919 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:01:31.501625  430919 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1007 13:01:31.501526  430919 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:01:31.501814  430919 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:01:31.503472  430919 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:01:31.503505  430919 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:01:31.503474  430919 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:01:31.503585  430919 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1007 13:01:31.503630  430919 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:01:31.503483  430919 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:01:31.503890  430919 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1007 13:01:31.504034  430919 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:01:31.658282  430919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:01:31.670655  430919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1007 13:01:31.694517  430919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:01:31.709174  430919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1007 13:01:31.729719  430919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1007 13:01:31.731769  430919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:01:31.743311  430919 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1007 13:01:31.743414  430919 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:01:31.743481  430919 ssh_runner.go:195] Run: which crictl
	I1007 13:01:31.744408  430919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:01:31.785253  430919 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1007 13:01:31.785305  430919 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1007 13:01:31.785357  430919 ssh_runner.go:195] Run: which crictl
	I1007 13:01:31.841974  430919 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1007 13:01:31.842024  430919 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:01:31.842072  430919 ssh_runner.go:195] Run: which crictl
	I1007 13:01:31.908354  430919 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1007 13:01:31.908404  430919 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1007 13:01:31.908455  430919 ssh_runner.go:195] Run: which crictl
	I1007 13:01:31.917650  430919 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1007 13:01:31.917705  430919 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1007 13:01:31.917712  430919 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1007 13:01:31.917728  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:01:31.917743  430919 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:01:31.917748  430919 ssh_runner.go:195] Run: which crictl
	I1007 13:01:31.917769  430919 ssh_runner.go:195] Run: which crictl
	I1007 13:01:31.917652  430919 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1007 13:01:31.917787  430919 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:01:31.917810  430919 ssh_runner.go:195] Run: which crictl
	I1007 13:01:31.917824  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:01:31.917829  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:01:31.917858  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:01:31.957298  430919 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:01:32.035981  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:01:32.036026  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:01:32.036133  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:01:32.036163  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:01:32.039713  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:01:32.039778  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:01:32.039834  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:01:32.217059  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:01:32.219933  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1007 13:01:32.235580  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:01:32.235612  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1007 13:01:32.235689  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:01:32.235727  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1007 13:01:32.235942  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1007 13:01:32.359731  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1007 13:01:32.359737  430919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1007 13:01:32.405921  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1007 13:01:32.411158  430919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1007 13:01:32.411219  430919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1007 13:01:32.411314  430919 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1007 13:01:32.411390  430919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1007 13:01:32.451698  430919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1007 13:01:32.473907  430919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1007 13:01:32.487941  430919 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1007 13:01:32.488028  430919 cache_images.go:92] duration metric: took 986.590179ms to LoadCachedImages
	W1007 13:01:32.488113  430919 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19763-377026/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1007 13:01:32.488132  430919 kubeadm.go:934] updating node { 192.168.50.141 8443 v1.20.0 crio true true} ...
	I1007 13:01:32.488255  430919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-415734 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-415734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
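The unit drop-in above pins the kubelet to the CRI-O socket and to the node's IP and hostname. A Go sketch of assembling that ExecStart line from the node settings (helper name kubeletExecStart assumed, not minikube's actual API):

package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart composes the kubelet command line written into the
// systemd drop-in shown above.
func kubeletExecStart(version, nodeName, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime=remote",
		"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--network-plugin=cni",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletExecStart("v1.20.0", "kubernetes-upgrade-415734", "192.168.50.141"))
}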
	I1007 13:01:32.488337  430919 ssh_runner.go:195] Run: crio config
	I1007 13:01:32.553734  430919 cni.go:84] Creating CNI manager for ""
	I1007 13:01:32.553759  430919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:01:32.553770  430919 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:01:32.553788  430919 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.141 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-415734 NodeName:kubernetes-upgrade-415734 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1007 13:01:32.553954  430919 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-415734"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:01:32.554032  430919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1007 13:01:32.567302  430919 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:01:32.567404  430919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:01:32.580686  430919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1007 13:01:32.601542  430919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:01:32.621283  430919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1007 13:01:32.642197  430919 ssh_runner.go:195] Run: grep 192.168.50.141	control-plane.minikube.internal$ /etc/hosts
	I1007 13:01:32.646803  430919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:01:32.661230  430919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:01:32.808877  430919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:01:32.827769  430919 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734 for IP: 192.168.50.141
	I1007 13:01:32.827808  430919 certs.go:194] generating shared ca certs ...
	I1007 13:01:32.827833  430919 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:01:32.828047  430919 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 13:01:32.828109  430919 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 13:01:32.828124  430919 certs.go:256] generating profile certs ...
	I1007 13:01:32.828219  430919 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/client.key
	I1007 13:01:32.828239  430919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/client.crt with IP's: []
	I1007 13:01:32.903832  430919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/client.crt ...
	I1007 13:01:32.903880  430919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/client.crt: {Name:mkeaafceac00cc5221b7fc24bf8851dd2bb6aaf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:01:32.904149  430919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/client.key ...
	I1007 13:01:32.904178  430919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/client.key: {Name:mk6a9208380a8b9a941a99af05490f2511aade0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:01:32.904341  430919 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.key.5df95fcf
	I1007 13:01:32.904373  430919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.crt.5df95fcf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.141]
	I1007 13:01:32.999027  430919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.crt.5df95fcf ...
	I1007 13:01:32.999062  430919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.crt.5df95fcf: {Name:mkc37b5bb57d5a3ae5f61ca46c94267121edf2c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:01:32.999290  430919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.key.5df95fcf ...
	I1007 13:01:32.999308  430919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.key.5df95fcf: {Name:mk1bc82c1d4226c58e01fb1e010c3f3c4c9051e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:01:32.999413  430919 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.crt.5df95fcf -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.crt
	I1007 13:01:32.999529  430919 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.key.5df95fcf -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.key
	I1007 13:01:32.999619  430919 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.key
	I1007 13:01:32.999650  430919 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.crt with IP's: []
	I1007 13:01:33.249114  430919 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.crt ...
	I1007 13:01:33.249153  430919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.crt: {Name:mkdf9e6b567a3c4f86106740d8ab61c26c60d1f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:01:33.286117  430919 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.key ...
	I1007 13:01:33.286177  430919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.key: {Name:mka90e6c48ed9eac7791b7b78f2e665f18d81e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
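The crypto.go steps above generate the profile's client, apiserver, and aggregator certificates, each signed by an existing CA. A self-contained Go sketch of that pattern using the standard library (names, lifetimes, and key sizes here are illustrative, not minikube's defaults):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Assumed for the sketch: generate a throwaway CA; in the run above the
	// CA key/cert already exist under .minikube and are reused.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Client ("minikube-user") certificate signed by the CA.
	clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, _ := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)

	// Write the signed certificate as PEM, analogous to the client.crt file.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: clientDER})
}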
	I1007 13:01:33.286516  430919 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 13:01:33.286577  430919 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 13:01:33.286596  430919 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:01:33.286631  430919 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:01:33.286665  430919 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:01:33.286696  430919 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 13:01:33.286750  430919 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 13:01:33.287687  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:01:33.319824  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 13:01:33.346801  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:01:33.379539  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 13:01:33.409951  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 13:01:33.440334  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:01:33.472179  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:01:33.502076  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:01:33.531369  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 13:01:33.561372  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 13:01:33.589535  430919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:01:33.621651  430919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:01:33.646230  430919 ssh_runner.go:195] Run: openssl version
	I1007 13:01:33.654968  430919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:01:33.671580  430919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:01:33.679232  430919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:01:33.679316  430919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:01:33.688124  430919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:01:33.701045  430919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 13:01:33.717861  430919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 13:01:33.724865  430919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 13:01:33.724998  430919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 13:01:33.733561  430919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 13:01:33.753983  430919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 13:01:33.770882  430919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 13:01:33.777677  430919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 13:01:33.777754  430919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 13:01:33.785869  430919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
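The three repeated blocks above install each CA bundle into the guest's trust store: the PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs, and a second symlink named after its OpenSSL subject hash (the output of "openssl x509 -hash -noout") is added so hashed-directory lookups resolve it. A Go sketch of the commands involved (helper name installCACmds assumed):

package main

import "fmt"

// installCACmds returns the commands that link a CA PEM into /etc/ssl/certs
// and create the <subject-hash>.0 symlink, mirroring the log above.
func installCACmds(name, hash string) []string {
	src := fmt.Sprintf("/usr/share/ca-certificates/%s", name)
	return []string{
		fmt.Sprintf(`sudo /bin/bash -c "test -s %s && ln -fs %s /etc/ssl/certs/%s"`, src, src, name),
		fmt.Sprintf("openssl x509 -hash -noout -in %s", src), // prints the subject hash
		fmt.Sprintf(`sudo /bin/bash -c "test -L /etc/ssl/certs/%s.0 || ln -fs /etc/ssl/certs/%s /etc/ssl/certs/%s.0"`, hash, name, hash),
	}
}

func main() {
	for _, cmd := range installCACmds("minikubeCA.pem", "b5213941") {
		fmt.Println(cmd)
	}
}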
	I1007 13:01:33.799372  430919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:01:33.805018  430919 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:01:33.805096  430919 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-415734 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-415734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:01:33.805222  430919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:01:33.805291  430919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:01:33.869666  430919 cri.go:89] found id: ""
	I1007 13:01:33.869755  430919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:01:33.885069  430919 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:01:33.902382  430919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:01:33.914159  430919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:01:33.914192  430919 kubeadm.go:157] found existing configuration files:
	
	I1007 13:01:33.914252  430919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:01:33.925197  430919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:01:33.925271  430919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:01:33.935826  430919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:01:33.946054  430919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:01:33.946115  430919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:01:33.959850  430919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:01:33.973662  430919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:01:33.973743  430919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:01:33.988113  430919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:01:33.998813  430919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:01:33.998899  430919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:01:34.009922  430919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:01:34.147385  430919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:01:34.147453  430919 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:01:34.307417  430919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:01:34.307543  430919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:01:34.307675  430919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:01:34.507987  430919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:01:34.511062  430919 out.go:235]   - Generating certificates and keys ...
	I1007 13:01:34.511192  430919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:01:34.511287  430919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:01:34.707512  430919 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 13:01:34.811110  430919 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 13:01:34.983969  430919 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 13:01:35.103144  430919 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 13:01:35.237872  430919 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 13:01:35.238075  430919 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-415734 localhost] and IPs [192.168.50.141 127.0.0.1 ::1]
	I1007 13:01:35.369245  430919 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 13:01:35.369559  430919 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-415734 localhost] and IPs [192.168.50.141 127.0.0.1 ::1]
	I1007 13:01:35.717164  430919 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 13:01:35.925196  430919 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 13:01:36.198696  430919 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 13:01:36.202655  430919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:01:36.317993  430919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:01:36.518768  430919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:01:36.802171  430919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:01:36.913719  430919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:01:36.932628  430919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:01:36.934750  430919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:01:36.934904  430919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:01:37.093790  430919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:01:37.096020  430919 out.go:235]   - Booting up control plane ...
	I1007 13:01:37.096185  430919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:01:37.103614  430919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:01:37.105206  430919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:01:37.113644  430919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:01:37.124405  430919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:02:17.118538  430919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:02:17.119846  430919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:02:17.120070  430919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:02:22.120114  430919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:02:22.120313  430919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:02:32.119984  430919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:02:32.120287  430919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:02:52.120295  430919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:02:52.120547  430919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:03:32.122109  430919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:03:32.122400  430919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:03:32.122444  430919 kubeadm.go:310] 
	I1007 13:03:32.122498  430919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:03:32.122557  430919 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:03:32.122564  430919 kubeadm.go:310] 
	I1007 13:03:32.122610  430919 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:03:32.122676  430919 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:03:32.122836  430919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:03:32.122855  430919 kubeadm.go:310] 
	I1007 13:03:32.123014  430919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:03:32.123059  430919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:03:32.123123  430919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:03:32.123143  430919 kubeadm.go:310] 
	I1007 13:03:32.123294  430919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:03:32.123412  430919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:03:32.123438  430919 kubeadm.go:310] 
	I1007 13:03:32.123608  430919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:03:32.123748  430919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:03:32.123874  430919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:03:32.123975  430919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:03:32.123985  430919 kubeadm.go:310] 
	I1007 13:03:32.125325  430919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:03:32.125453  430919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:03:32.125546  430919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1007 13:03:32.125809  430919 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-415734 localhost] and IPs [192.168.50.141 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-415734 localhost] and IPs [192.168.50.141 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-415734 localhost] and IPs [192.168.50.141 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-415734 localhost] and IPs [192.168.50.141 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
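The failure mode here is that the kubelet never answers its health endpoint, so kubeadm gives up after the 4m0s wait. The probe it polls and the follow-up checks it suggests can be run by hand on the node; a sketch using only the commands quoted in the output above:

	curl -sSL http://localhost:10248/healthz    # the kubelet health probe kubeadm polls
	systemctl status kubelet                    # is the service running at all?
	journalctl -xeu kubelet                     # kubelet logs, usually where the real error is
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # use an ID from the listing above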
	
	I1007 13:03:32.125876  430919 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:03:34.359901  430919 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.233988187s)
	I1007 13:03:34.360011  430919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:03:34.375086  430919 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:03:34.389547  430919 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:03:34.389578  430919 kubeadm.go:157] found existing configuration files:
	
	I1007 13:03:34.389632  430919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:03:34.402597  430919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:03:34.402659  430919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:03:34.416102  430919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:03:34.426269  430919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:03:34.426337  430919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:03:34.436738  430919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:03:34.446786  430919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:03:34.446860  430919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:03:34.458285  430919 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:03:34.470394  430919 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:03:34.470458  430919 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:03:34.483817  430919 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:03:34.558723  430919 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:03:34.558879  430919 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:03:34.752160  430919 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:03:34.752299  430919 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:03:34.752466  430919 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:03:34.982689  430919 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:03:34.985276  430919 out.go:235]   - Generating certificates and keys ...
	I1007 13:03:34.985378  430919 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:03:34.985451  430919 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:03:34.985553  430919 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:03:34.985641  430919 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:03:34.985736  430919 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:03:34.985801  430919 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:03:34.985879  430919 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:03:34.986534  430919 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:03:34.987741  430919 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:03:34.989196  430919 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:03:34.989543  430919 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:03:34.989620  430919 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:03:35.220117  430919 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:03:35.336737  430919 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:03:35.438559  430919 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:03:35.611421  430919 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:03:35.630283  430919 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:03:35.631913  430919 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:03:35.631988  430919 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:03:35.804620  430919 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:03:35.806495  430919 out.go:235]   - Booting up control plane ...
	I1007 13:03:35.806660  430919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:03:35.812971  430919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:03:35.813970  430919 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:03:35.817256  430919 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:03:35.820518  430919 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:04:15.823505  430919 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1007 13:04:15.824441  430919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:04:15.824778  430919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:04:20.825083  430919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:04:20.825341  430919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:04:30.825714  430919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:04:30.825938  430919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:04:50.825322  430919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:04:50.825576  430919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:05:30.825069  430919 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:05:30.825316  430919 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:05:30.825352  430919 kubeadm.go:310] 
	I1007 13:05:30.825440  430919 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:05:30.825502  430919 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:05:30.825513  430919 kubeadm.go:310] 
	I1007 13:05:30.825562  430919 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:05:30.825616  430919 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:05:30.825786  430919 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:05:30.825809  430919 kubeadm.go:310] 
	I1007 13:05:30.825964  430919 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:05:30.826017  430919 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:05:30.826070  430919 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:05:30.826081  430919 kubeadm.go:310] 
	I1007 13:05:30.826226  430919 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:05:30.826351  430919 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:05:30.826364  430919 kubeadm.go:310] 
	I1007 13:05:30.826516  430919 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:05:30.826632  430919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:05:30.826709  430919 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:05:30.826795  430919 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:05:30.826803  430919 kubeadm.go:310] 
	I1007 13:05:30.827577  430919 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:05:30.827676  430919 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:05:30.827771  430919 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1007 13:05:30.827857  430919 kubeadm.go:394] duration metric: took 3m57.02276786s to StartCluster
	I1007 13:05:30.827916  430919 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:05:30.827990  430919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:05:30.881243  430919 cri.go:89] found id: ""
	I1007 13:05:30.881293  430919 logs.go:282] 0 containers: []
	W1007 13:05:30.881306  430919 logs.go:284] No container was found matching "kube-apiserver"
	I1007 13:05:30.881314  430919 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1007 13:05:30.881383  430919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:05:30.928532  430919 cri.go:89] found id: ""
	I1007 13:05:30.928569  430919 logs.go:282] 0 containers: []
	W1007 13:05:30.928579  430919 logs.go:284] No container was found matching "etcd"
	I1007 13:05:30.928586  430919 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1007 13:05:30.928641  430919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:05:30.972457  430919 cri.go:89] found id: ""
	I1007 13:05:30.972487  430919 logs.go:282] 0 containers: []
	W1007 13:05:30.972503  430919 logs.go:284] No container was found matching "coredns"
	I1007 13:05:30.972511  430919 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:05:30.972571  430919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:05:31.020351  430919 cri.go:89] found id: ""
	I1007 13:05:31.020385  430919 logs.go:282] 0 containers: []
	W1007 13:05:31.020398  430919 logs.go:284] No container was found matching "kube-scheduler"
	I1007 13:05:31.020407  430919 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:05:31.020478  430919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:05:31.064372  430919 cri.go:89] found id: ""
	I1007 13:05:31.064409  430919 logs.go:282] 0 containers: []
	W1007 13:05:31.064430  430919 logs.go:284] No container was found matching "kube-proxy"
	I1007 13:05:31.064439  430919 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:05:31.064510  430919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:05:31.107878  430919 cri.go:89] found id: ""
	I1007 13:05:31.107916  430919 logs.go:282] 0 containers: []
	W1007 13:05:31.107930  430919 logs.go:284] No container was found matching "kube-controller-manager"
	I1007 13:05:31.107940  430919 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1007 13:05:31.108015  430919 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:05:31.151981  430919 cri.go:89] found id: ""
	I1007 13:05:31.152012  430919 logs.go:282] 0 containers: []
	W1007 13:05:31.152021  430919 logs.go:284] No container was found matching "kindnet"
	I1007 13:05:31.152032  430919 logs.go:123] Gathering logs for container status ...
	I1007 13:05:31.152053  430919 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:05:31.199172  430919 logs.go:123] Gathering logs for kubelet ...
	I1007 13:05:31.199210  430919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:05:31.252647  430919 logs.go:123] Gathering logs for dmesg ...
	I1007 13:05:31.252693  430919 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:05:31.267898  430919 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:05:31.267929  430919 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1007 13:05:31.422651  430919 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1007 13:05:31.422678  430919 logs.go:123] Gathering logs for CRI-O ...
	I1007 13:05:31.422696  430919 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
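Since every per-component container listing came back empty, the log gathering falls back to host-level sources. The same sweep can be approximated manually; a sketch mirroring the commands shown in the log (the component list is taken from the listings above, not an exhaustive set):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$c"   # empty output means the static pod never started
	done
	sudo journalctl -u kubelet -n 400          # kubelet log tail
	sudo journalctl -u crio -n 400             # CRI-O log tail
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig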
	W1007 13:05:31.531775  430919 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1007 13:05:31.531892  430919 out.go:270] * 
	* 
	W1007 13:05:31.531971  430919 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:05:31.531989  430919 out.go:270] * 
	* 
	W1007 13:05:31.532798  430919 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:05:31.536178  430919 out.go:201] 
	W1007 13:05:31.537464  430919 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1007 13:05:31.537499  430919 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1007 13:05:31.537524  430919 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1007 13:05:31.539001  430919 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-415734 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
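Note: the suggestion embedded in the failure output above points at a kubelet cgroup-driver mismatch on the v1.20.0 start. As a rough sketch of the manual retry that suggestion implies (not something this test performs), one might rerun the same start command with the extra kubelet config; the profile name and flags are copied from the failed invocation, and the --extra-config value comes straight from minikube's own hint:
	out/minikube-linux-amd64 start -p kubernetes-upgrade-415734 --memory=2200 \
	  --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 \
	  --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd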
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-415734
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-415734: (1.476808151s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-415734 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-415734 status --format={{.Host}}: exit status 7 (78.210688ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
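Note on the "(may be ok)": `minikube status` deliberately exits non-zero when the profile is not running, and exit code 7 with a Host value of "Stopped" is what a cleanly stopped profile typically returns, which is why the test tolerates it here. A minimal shell sketch of how a caller might handle that (illustrative only, reusing the same profile name):
	# Exit code 7 is expected right after `minikube stop`, so don't abort on it.
	host=$(out/minikube-linux-amd64 -p kubernetes-upgrade-415734 status --format='{{.Host}}') || true
	if [ "$host" != "Stopped" ] && [ "$host" != "Running" ]; then
	  echo "unexpected host state: $host" >&2
	  exit 1
	fi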
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-415734 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-415734 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.729953906s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-415734 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-415734 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-415734 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (91.016271ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-415734] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-415734
	    minikube start -p kubernetes-upgrade-415734 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4157342 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-415734 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-415734 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1007 13:06:42.462886  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-415734 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.162185627s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-07 13:07:13.200569955 +0000 UTC m=+5751.819153961
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-415734 -n kubernetes-upgrade-415734
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-415734 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-415734 logs -n 25: (1.865167343s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-226737                                | NoKubernetes-226737       | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC | 07 Oct 24 13:02 UTC |
	|         | --no-kubernetes --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-971127 ssh cat                     | force-systemd-flag-971127 | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC | 07 Oct 24 13:02 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf                    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-971127                          | force-systemd-flag-971127 | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC | 07 Oct 24 13:02 UTC |
	| start   | -p cert-options-831789                                | cert-options-831789       | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC | 07 Oct 24 13:03 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-226737 sudo                           | NoKubernetes-226737       | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-226737                                | NoKubernetes-226737       | jenkins | v1.34.0 | 07 Oct 24 13:02 UTC | 07 Oct 24 13:02 UTC |
	| start   | -p NoKubernetes-226737                                | NoKubernetes-226737       | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | cert-options-831789 ssh                               | cert-options-831789       | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-831789 -- sudo                        | cert-options-831789       | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-831789                                | cert-options-831789       | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	| start   | -p old-k8s-version-646622                             | old-k8s-version-646622    | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-226737 sudo                           | NoKubernetes-226737       | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC |                     |
	|         | systemctl is-active --quiet                           |                           |         |         |                     |                     |
	|         | service kubelet                                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-226737                                | NoKubernetes-226737       | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:03 UTC |
	| start   | -p no-preload-313579                                  | no-preload-313579         | jenkins | v1.34.0 | 07 Oct 24 13:03 UTC | 07 Oct 24 13:05 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	| start   | -p cert-expiration-926690                             | cert-expiration-926690    | jenkins | v1.34.0 | 07 Oct 24 13:05 UTC | 07 Oct 24 13:05 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-415734                          | kubernetes-upgrade-415734 | jenkins | v1.34.0 | 07 Oct 24 13:05 UTC | 07 Oct 24 13:05 UTC |
	| start   | -p kubernetes-upgrade-415734                          | kubernetes-upgrade-415734 | jenkins | v1.34.0 | 07 Oct 24 13:05 UTC | 07 Oct 24 13:06 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-313579            | no-preload-313579         | jenkins | v1.34.0 | 07 Oct 24 13:05 UTC | 07 Oct 24 13:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-313579                                  | no-preload-313579         | jenkins | v1.34.0 | 07 Oct 24 13:05 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-926690                             | cert-expiration-926690    | jenkins | v1.34.0 | 07 Oct 24 13:05 UTC | 07 Oct 24 13:05 UTC |
	| start   | -p embed-certs-581312                                 | embed-certs-581312        | jenkins | v1.34.0 | 07 Oct 24 13:05 UTC | 07 Oct 24 13:06 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-415734                          | kubernetes-upgrade-415734 | jenkins | v1.34.0 | 07 Oct 24 13:06 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-415734                          | kubernetes-upgrade-415734 | jenkins | v1.34.0 | 07 Oct 24 13:06 UTC | 07 Oct 24 13:07 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-581312           | embed-certs-581312        | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC | 07 Oct 24 13:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p embed-certs-581312                                 | embed-certs-581312        | jenkins | v1.34.0 | 07 Oct 24 13:07 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:06:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:06:13.085601  435362 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:06:13.085712  435362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:06:13.085716  435362 out.go:358] Setting ErrFile to fd 2...
	I1007 13:06:13.085720  435362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:06:13.085912  435362 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 13:06:13.086481  435362 out.go:352] Setting JSON to false
	I1007 13:06:13.087633  435362 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10119,"bootTime":1728296254,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 13:06:13.087757  435362 start.go:139] virtualization: kvm guest
	I1007 13:06:13.089943  435362 out.go:177] * [kubernetes-upgrade-415734] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 13:06:13.091558  435362 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 13:06:13.091608  435362 notify.go:220] Checking for updates...
	I1007 13:06:13.094811  435362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:06:13.096338  435362 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 13:06:13.097886  435362 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 13:06:13.099298  435362 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 13:06:13.100733  435362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:06:13.102745  435362 config.go:182] Loaded profile config "kubernetes-upgrade-415734": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:06:13.103196  435362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:06:13.103280  435362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:06:13.118999  435362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35525
	I1007 13:06:13.119553  435362 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:06:13.120163  435362 main.go:141] libmachine: Using API Version  1
	I1007 13:06:13.120185  435362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:06:13.120531  435362 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:06:13.120699  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:06:13.120945  435362 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:06:13.121251  435362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:06:13.121296  435362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:06:13.137846  435362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39629
	I1007 13:06:13.138406  435362 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:06:13.139027  435362 main.go:141] libmachine: Using API Version  1
	I1007 13:06:13.139063  435362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:06:13.139486  435362 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:06:13.139714  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:06:13.178127  435362 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 13:06:13.179473  435362 start.go:297] selected driver: kvm2
	I1007 13:06:13.179499  435362 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-415734 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-415734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:06:13.179600  435362 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:06:13.180259  435362 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:06:13.180355  435362 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 13:06:13.196730  435362 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 13:06:13.197151  435362 cni.go:84] Creating CNI manager for ""
	I1007 13:06:13.197205  435362 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:06:13.197254  435362 start.go:340] cluster config:
	{Name:kubernetes-upgrade-415734 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-415734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:06:13.197365  435362 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:06:13.199890  435362 out.go:177] * Starting "kubernetes-upgrade-415734" primary control-plane node in "kubernetes-upgrade-415734" cluster
	I1007 13:06:09.032988  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:09.033571  435085 main.go:141] libmachine: (embed-certs-581312) DBG | unable to find current IP address of domain embed-certs-581312 in network mk-embed-certs-581312
	I1007 13:06:09.033597  435085 main.go:141] libmachine: (embed-certs-581312) DBG | I1007 13:06:09.033508  435108 retry.go:31] will retry after 2.454790397s: waiting for machine to come up
	I1007 13:06:11.490933  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:11.491610  435085 main.go:141] libmachine: (embed-certs-581312) DBG | unable to find current IP address of domain embed-certs-581312 in network mk-embed-certs-581312
	I1007 13:06:11.491640  435085 main.go:141] libmachine: (embed-certs-581312) DBG | I1007 13:06:11.491561  435108 retry.go:31] will retry after 2.326109586s: waiting for machine to come up
	I1007 13:06:12.775336  433662 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1007 13:06:12.775626  433662 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1007 13:06:12.775663  433662 kubeadm.go:310] 
	I1007 13:06:12.775764  433662 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1007 13:06:12.775845  433662 kubeadm.go:310] 		timed out waiting for the condition
	I1007 13:06:12.775858  433662 kubeadm.go:310] 
	I1007 13:06:12.775907  433662 kubeadm.go:310] 	This error is likely caused by:
	I1007 13:06:12.775962  433662 kubeadm.go:310] 		- The kubelet is not running
	I1007 13:06:12.776116  433662 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1007 13:06:12.776135  433662 kubeadm.go:310] 
	I1007 13:06:12.776271  433662 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1007 13:06:12.776319  433662 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1007 13:06:12.776383  433662 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1007 13:06:12.776422  433662 kubeadm.go:310] 
	I1007 13:06:12.776561  433662 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1007 13:06:12.776644  433662 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1007 13:06:12.776652  433662 kubeadm.go:310] 
	I1007 13:06:12.776735  433662 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1007 13:06:12.776854  433662 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1007 13:06:12.776961  433662 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1007 13:06:12.777056  433662 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1007 13:06:12.777067  433662 kubeadm.go:310] 
	I1007 13:06:12.778015  433662 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:06:12.778190  433662 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1007 13:06:12.778286  433662 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1007 13:06:12.778506  433662 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-646622] and IPs [192.168.72.124 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-646622] and IPs [192.168.72.124 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1007 13:06:12.778557  433662 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1007 13:06:13.270707  433662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:06:13.286506  433662 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:06:13.297612  433662 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:06:13.297637  433662 kubeadm.go:157] found existing configuration files:
	
	I1007 13:06:13.297696  433662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:06:13.308040  433662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:06:13.308109  433662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:06:13.318573  433662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:06:13.328491  433662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:06:13.328552  433662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:06:13.338946  433662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:06:13.349175  433662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:06:13.349244  433662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:06:13.360240  433662 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:06:13.370383  433662 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:06:13.370455  433662 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:06:13.381236  433662 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:06:13.459063  433662 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1007 13:06:13.459130  433662 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:06:13.606585  433662 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:06:13.606739  433662 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:06:13.606884  433662 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 13:06:13.799611  433662 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:06:13.802756  433662 out.go:235]   - Generating certificates and keys ...
	I1007 13:06:13.802979  433662 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:06:13.803191  433662 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:06:13.803486  433662 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 13:06:13.803944  433662 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 13:06:13.804049  433662 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 13:06:13.804121  433662 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 13:06:13.804202  433662 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 13:06:13.804304  433662 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 13:06:13.804433  433662 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 13:06:13.804578  433662 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 13:06:13.804676  433662 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 13:06:13.804799  433662 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:06:13.961765  433662 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:06:14.094311  433662 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:06:14.240645  433662 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:06:14.359089  433662 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:06:14.381342  433662 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:06:14.382524  433662 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:06:14.382593  433662 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:06:14.528001  433662 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:06:14.530174  433662 out.go:235]   - Booting up control plane ...
	I1007 13:06:14.530337  433662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:06:14.542247  433662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:06:14.543917  433662 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:06:14.544941  433662 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:06:14.549908  433662 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 13:06:13.201147  435362 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:06:13.201198  435362 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1007 13:06:13.201210  435362 cache.go:56] Caching tarball of preloaded images
	I1007 13:06:13.201299  435362 preload.go:172] Found /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1007 13:06:13.201311  435362 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1007 13:06:13.201409  435362 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/config.json ...
	I1007 13:06:13.201613  435362 start.go:360] acquireMachinesLock for kubernetes-upgrade-415734: {Name:mk3938302f308673f9bad9d1885814d6352aa838 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 13:06:13.819768  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:13.820324  435085 main.go:141] libmachine: (embed-certs-581312) DBG | unable to find current IP address of domain embed-certs-581312 in network mk-embed-certs-581312
	I1007 13:06:13.820354  435085 main.go:141] libmachine: (embed-certs-581312) DBG | I1007 13:06:13.820274  435108 retry.go:31] will retry after 3.597580375s: waiting for machine to come up
	I1007 13:06:17.422016  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:17.422598  435085 main.go:141] libmachine: (embed-certs-581312) DBG | unable to find current IP address of domain embed-certs-581312 in network mk-embed-certs-581312
	I1007 13:06:17.422629  435085 main.go:141] libmachine: (embed-certs-581312) DBG | I1007 13:06:17.422539  435108 retry.go:31] will retry after 4.523273663s: waiting for machine to come up
	I1007 13:06:21.948126  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:21.948646  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has current primary IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:21.948669  435085 main.go:141] libmachine: (embed-certs-581312) Found IP for machine: 192.168.61.253
	I1007 13:06:21.948680  435085 main.go:141] libmachine: (embed-certs-581312) Reserving static IP address...
	I1007 13:06:21.949064  435085 main.go:141] libmachine: (embed-certs-581312) DBG | unable to find host DHCP lease matching {name: "embed-certs-581312", mac: "52:54:00:59:9f:db", ip: "192.168.61.253"} in network mk-embed-certs-581312
	I1007 13:06:22.036314  435085 main.go:141] libmachine: (embed-certs-581312) DBG | Getting to WaitForSSH function...
	I1007 13:06:22.036365  435085 main.go:141] libmachine: (embed-certs-581312) Reserved static IP address: 192.168.61.253
	I1007 13:06:22.036381  435085 main.go:141] libmachine: (embed-certs-581312) Waiting for SSH to be available...
	I1007 13:06:22.039658  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.040076  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:22.040105  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.040304  435085 main.go:141] libmachine: (embed-certs-581312) DBG | Using SSH client type: external
	I1007 13:06:22.040333  435085 main.go:141] libmachine: (embed-certs-581312) DBG | Using SSH private key: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/embed-certs-581312/id_rsa (-rw-------)
	I1007 13:06:22.040363  435085 main.go:141] libmachine: (embed-certs-581312) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/embed-certs-581312/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1007 13:06:22.040382  435085 main.go:141] libmachine: (embed-certs-581312) DBG | About to run SSH command:
	I1007 13:06:22.040430  435085 main.go:141] libmachine: (embed-certs-581312) DBG | exit 0
	I1007 13:06:22.171549  435085 main.go:141] libmachine: (embed-certs-581312) DBG | SSH cmd err, output: <nil>: 
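The DBG lines above record the exact external SSH invocation libmachine uses to probe the new guest. Reordered into a directly runnable form (flags, key path, and address copied from the log), it is roughly:

    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19763-377026/.minikube/machines/embed-certs-581312/id_rsa \
      -p 22 docker@192.168.61.253 'exit 0'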
	I1007 13:06:22.171935  435085 main.go:141] libmachine: (embed-certs-581312) KVM machine creation complete!
	I1007 13:06:22.172238  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetConfigRaw
	I1007 13:06:22.172894  435085 main.go:141] libmachine: (embed-certs-581312) Calling .DriverName
	I1007 13:06:22.173127  435085 main.go:141] libmachine: (embed-certs-581312) Calling .DriverName
	I1007 13:06:22.173322  435085 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1007 13:06:22.173340  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetState
	I1007 13:06:22.174771  435085 main.go:141] libmachine: Detecting operating system of created instance...
	I1007 13:06:22.174790  435085 main.go:141] libmachine: Waiting for SSH to be available...
	I1007 13:06:22.174798  435085 main.go:141] libmachine: Getting to WaitForSSH function...
	I1007 13:06:22.174807  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:22.177569  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.178047  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:22.178068  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.178306  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:22.178514  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.178698  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.178840  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:22.179034  435085 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:22.179282  435085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.253 22 <nil> <nil>}
	I1007 13:06:22.179297  435085 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1007 13:06:22.290642  435085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:06:22.290675  435085 main.go:141] libmachine: Detecting the provisioner...
	I1007 13:06:22.290687  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:22.293993  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.294331  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:22.294364  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.294622  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:22.294844  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.295020  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.295174  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:22.295401  435085 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:22.295662  435085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.253 22 <nil> <nil>}
	I1007 13:06:22.295682  435085 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1007 13:06:22.404268  435085 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1007 13:06:22.404385  435085 main.go:141] libmachine: found compatible host: buildroot
	I1007 13:06:22.404402  435085 main.go:141] libmachine: Provisioning with buildroot...
	I1007 13:06:22.404412  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetMachineName
	I1007 13:06:22.404714  435085 buildroot.go:166] provisioning hostname "embed-certs-581312"
	I1007 13:06:22.404748  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetMachineName
	I1007 13:06:22.404983  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:22.408410  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.408848  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:22.408880  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.409119  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:22.409354  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.409556  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.409722  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:22.409932  435085 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:22.410164  435085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.253 22 <nil> <nil>}
	I1007 13:06:22.410179  435085 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-581312 && echo "embed-certs-581312" | sudo tee /etc/hostname
	I1007 13:06:22.534888  435085 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-581312
	
	I1007 13:06:22.534923  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:22.538152  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.538559  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:22.538613  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.538785  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:22.539029  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.539224  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.539352  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:22.539522  435085 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:22.539737  435085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.253 22 <nil> <nil>}
	I1007 13:06:22.539759  435085 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-581312' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-581312/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-581312' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:06:22.658261  435085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:06:22.658309  435085 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 13:06:22.658382  435085 buildroot.go:174] setting up certificates
	I1007 13:06:22.658396  435085 provision.go:84] configureAuth start
	I1007 13:06:22.658415  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetMachineName
	I1007 13:06:22.658765  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetIP
	I1007 13:06:22.661932  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.662278  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:22.662307  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.662452  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:22.665427  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.665957  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:22.665987  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.666216  435085 provision.go:143] copyHostCerts
	I1007 13:06:22.666291  435085 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 13:06:22.666318  435085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 13:06:22.666391  435085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 13:06:22.666540  435085 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 13:06:22.666556  435085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 13:06:22.666591  435085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 13:06:22.666720  435085 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 13:06:22.666733  435085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 13:06:22.666763  435085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 13:06:22.666833  435085 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.embed-certs-581312 san=[127.0.0.1 192.168.61.253 embed-certs-581312 localhost minikube]
	I1007 13:06:22.738261  435085 provision.go:177] copyRemoteCerts
	I1007 13:06:22.738325  435085 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:06:22.738352  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:22.741183  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.741615  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:22.741648  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.741845  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:22.742083  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.742240  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:22.742392  435085 sshutil.go:53] new ssh client: &{IP:192.168.61.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/embed-certs-581312/id_rsa Username:docker}
	I1007 13:06:22.826852  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:06:22.859187  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1007 13:06:22.895684  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
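The server certificate copied to /etc/docker/server.pem was generated with the SAN list shown in the provision.go line above. If a later TLS failure needs debugging, the SANs on the installed cert can be inspected with a standard openssl one-liner (illustrative, not something the test runs):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # the SANs should cover 127.0.0.1, 192.168.61.253, embed-certs-581312, localhost and minikube, per the san=[...] list above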
	I1007 13:06:22.934416  435085 provision.go:87] duration metric: took 275.998121ms to configureAuth
	I1007 13:06:22.934449  435085 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:06:22.934682  435085 config.go:182] Loaded profile config "embed-certs-581312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:06:22.934776  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:22.938127  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.938545  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:22.938604  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:22.938794  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:22.938991  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.939186  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:22.939354  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:22.939593  435085 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:22.939837  435085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.253 22 <nil> <nil>}
	I1007 13:06:22.939871  435085 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:06:23.177399  435085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:06:23.177429  435085 main.go:141] libmachine: Checking connection to Docker...
	I1007 13:06:23.177438  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetURL
	I1007 13:06:23.178776  435085 main.go:141] libmachine: (embed-certs-581312) DBG | Using libvirt version 6000000
	I1007 13:06:23.180976  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.181347  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:23.181378  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.181521  435085 main.go:141] libmachine: Docker is up and running!
	I1007 13:06:23.181537  435085 main.go:141] libmachine: Reticulating splines...
	I1007 13:06:23.181543  435085 client.go:171] duration metric: took 24.827591418s to LocalClient.Create
	I1007 13:06:23.181570  435085 start.go:167] duration metric: took 24.827656937s to libmachine.API.Create "embed-certs-581312"
	I1007 13:06:23.181585  435085 start.go:293] postStartSetup for "embed-certs-581312" (driver="kvm2")
	I1007 13:06:23.181601  435085 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:06:23.181626  435085 main.go:141] libmachine: (embed-certs-581312) Calling .DriverName
	I1007 13:06:23.181860  435085 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:06:23.181885  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:23.184184  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.184466  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:23.184502  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.184630  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:23.184797  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:23.184934  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:23.185034  435085 sshutil.go:53] new ssh client: &{IP:192.168.61.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/embed-certs-581312/id_rsa Username:docker}
	I1007 13:06:23.428078  435362 start.go:364] duration metric: took 10.226430987s to acquireMachinesLock for "kubernetes-upgrade-415734"
	I1007 13:06:23.428139  435362 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:06:23.428152  435362 fix.go:54] fixHost starting: 
	I1007 13:06:23.428606  435362 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:06:23.428656  435362 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:06:23.446480  435362 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45997
	I1007 13:06:23.447033  435362 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:06:23.447603  435362 main.go:141] libmachine: Using API Version  1
	I1007 13:06:23.447630  435362 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:06:23.447972  435362 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:06:23.448178  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:06:23.448317  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetState
	I1007 13:06:23.450038  435362 fix.go:112] recreateIfNeeded on kubernetes-upgrade-415734: state=Running err=<nil>
	W1007 13:06:23.450058  435362 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:06:23.453200  435362 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-415734" VM ...
	I1007 13:06:23.271656  435085 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:06:23.276067  435085 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:06:23.276091  435085 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 13:06:23.276162  435085 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 13:06:23.276252  435085 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 13:06:23.276361  435085 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:06:23.286460  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 13:06:23.312379  435085 start.go:296] duration metric: took 130.77415ms for postStartSetup
	I1007 13:06:23.312443  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetConfigRaw
	I1007 13:06:23.313070  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetIP
	I1007 13:06:23.315732  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.316162  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:23.316186  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.316473  435085 profile.go:143] Saving config to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/config.json ...
	I1007 13:06:23.316646  435085 start.go:128] duration metric: took 24.983706763s to createHost
	I1007 13:06:23.316668  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:23.319222  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.319608  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:23.319637  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.319799  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:23.319989  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:23.320147  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:23.320314  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:23.320467  435085 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:23.320691  435085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.61.253 22 <nil> <nil>}
	I1007 13:06:23.320705  435085 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:06:23.427876  435085 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728306383.393151399
	
	I1007 13:06:23.427907  435085 fix.go:216] guest clock: 1728306383.393151399
	I1007 13:06:23.427917  435085 fix.go:229] Guest: 2024-10-07 13:06:23.393151399 +0000 UTC Remote: 2024-10-07 13:06:23.316657033 +0000 UTC m=+25.116614246 (delta=76.494366ms)
	I1007 13:06:23.427943  435085 fix.go:200] guest clock delta is within tolerance: 76.494366ms
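The delta reported by fix.go is simply the guest's `date +%s.%N` reading minus the host-side timestamp taken when the command returned. With the two values copied from the log, the same arithmetic is:

    echo '1728306383.393151399 - 1728306383.316657033' | bc
    # .076494366 s, i.e. the 76.494366ms delta judged to be within tolerance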
	I1007 13:06:23.427950  435085 start.go:83] releasing machines lock for "embed-certs-581312", held for 25.095140696s
	I1007 13:06:23.427983  435085 main.go:141] libmachine: (embed-certs-581312) Calling .DriverName
	I1007 13:06:23.428278  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetIP
	I1007 13:06:23.431020  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.431392  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:23.431424  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.431590  435085 main.go:141] libmachine: (embed-certs-581312) Calling .DriverName
	I1007 13:06:23.432135  435085 main.go:141] libmachine: (embed-certs-581312) Calling .DriverName
	I1007 13:06:23.432361  435085 main.go:141] libmachine: (embed-certs-581312) Calling .DriverName
	I1007 13:06:23.432507  435085 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:06:23.432557  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:23.432578  435085 ssh_runner.go:195] Run: cat /version.json
	I1007 13:06:23.432602  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:23.435499  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.436037  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.436828  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:23.436853  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:23.436865  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:23.436855  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.436911  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:23.436923  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:23.437059  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:23.437069  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:23.437229  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:23.437244  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:23.437406  435085 sshutil.go:53] new ssh client: &{IP:192.168.61.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/embed-certs-581312/id_rsa Username:docker}
	I1007 13:06:23.437405  435085 sshutil.go:53] new ssh client: &{IP:192.168.61.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/embed-certs-581312/id_rsa Username:docker}
	I1007 13:06:23.539489  435085 ssh_runner.go:195] Run: systemctl --version
	I1007 13:06:23.546325  435085 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:06:23.711173  435085 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:06:23.717629  435085 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:06:23.717700  435085 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:06:23.738096  435085 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
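The find/exec command logged two lines up is printed with its shell quoting already stripped by ssh_runner. A runnable equivalent (quoting restored, GNU find assumed) that renames the bridge/podman CNI configs out of the way in the same fashion would look like:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;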
	I1007 13:06:23.738125  435085 start.go:495] detecting cgroup driver to use...
	I1007 13:06:23.738191  435085 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:06:23.762465  435085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:06:23.780385  435085 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:06:23.780450  435085 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:06:23.797827  435085 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:06:23.815199  435085 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:06:23.938141  435085 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:06:24.097645  435085 docker.go:233] disabling docker service ...
	I1007 13:06:24.097709  435085 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:06:24.113592  435085 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:06:24.129462  435085 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:06:24.273149  435085 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:06:24.414291  435085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:06:24.431187  435085 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:06:24.454288  435085 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:06:24.454360  435085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:24.466899  435085 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:06:24.466990  435085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:24.479916  435085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:24.494459  435085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:24.506354  435085 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:06:24.520978  435085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:24.534012  435085 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:24.553039  435085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
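Taken together, the sed edits above aim to leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a reconstruction from the commands in the log, not a dump of the actual file, which may contain additional keys):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]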
	I1007 13:06:24.564964  435085 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:06:24.576078  435085 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1007 13:06:24.576172  435085 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1007 13:06:24.592820  435085 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
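The first sysctl probe above exits with status 255 because the freshly booted buildroot guest does not yet have br_netfilter loaded, which is why the log flags it as "might be okay" and follows up with a modprobe and the ip_forward write. The resulting state could be verified with (illustrative only):

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # both keys should now resolve, and net.ipv4.ip_forward should read 1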
	I1007 13:06:24.606625  435085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:06:24.736753  435085 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:06:24.837373  435085 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:06:24.837467  435085 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:06:24.842556  435085 start.go:563] Will wait 60s for crictl version
	I1007 13:06:24.842640  435085 ssh_runner.go:195] Run: which crictl
	I1007 13:06:24.846636  435085 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:06:24.888558  435085 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:06:24.888663  435085 ssh_runner.go:195] Run: crio --version
	I1007 13:06:24.920531  435085 ssh_runner.go:195] Run: crio --version
	I1007 13:06:24.952307  435085 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:06:23.454611  435362 machine.go:93] provisionDockerMachine start ...
	I1007 13:06:23.454641  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:06:23.454939  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:06:23.457990  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:23.458494  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:23.458520  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:23.458696  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:06:23.458862  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:23.459037  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:23.459206  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:06:23.459416  435362 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:23.459692  435362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:06:23.459710  435362 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:06:23.568387  435362 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-415734
	
	I1007 13:06:23.568421  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetMachineName
	I1007 13:06:23.568669  435362 buildroot.go:166] provisioning hostname "kubernetes-upgrade-415734"
	I1007 13:06:23.568732  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetMachineName
	I1007 13:06:23.568950  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:06:23.572265  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:23.572707  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:23.572759  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:23.572988  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:06:23.573223  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:23.573422  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:23.573581  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:06:23.573800  435362 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:23.573991  435362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:06:23.574007  435362 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-415734 && echo "kubernetes-upgrade-415734" | sudo tee /etc/hostname
	I1007 13:06:23.706803  435362 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-415734
	
	I1007 13:06:23.706844  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:06:23.709923  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:23.710260  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:23.710300  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:23.710468  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:06:23.710681  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:23.710868  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:23.711030  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:06:23.711248  435362 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:23.711496  435362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:06:23.711522  435362 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-415734' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-415734/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-415734' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:06:23.844483  435362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:06:23.844530  435362 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19763-377026/.minikube CaCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19763-377026/.minikube}
	I1007 13:06:23.844584  435362 buildroot.go:174] setting up certificates
	I1007 13:06:23.844602  435362 provision.go:84] configureAuth start
	I1007 13:06:23.844623  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetMachineName
	I1007 13:06:23.845035  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetIP
	I1007 13:06:23.848197  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:23.848638  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:23.848671  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:23.848883  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:06:23.852008  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:23.852496  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:23.852527  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:23.852674  435362 provision.go:143] copyHostCerts
	I1007 13:06:23.852746  435362 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem, removing ...
	I1007 13:06:23.852767  435362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem
	I1007 13:06:23.852824  435362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/ca.pem (1082 bytes)
	I1007 13:06:23.852926  435362 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem, removing ...
	I1007 13:06:23.852934  435362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem
	I1007 13:06:23.852953  435362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/cert.pem (1123 bytes)
	I1007 13:06:23.853065  435362 exec_runner.go:144] found /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem, removing ...
	I1007 13:06:23.853075  435362 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem
	I1007 13:06:23.853094  435362 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19763-377026/.minikube/key.pem (1679 bytes)
	I1007 13:06:23.853170  435362 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-415734 san=[127.0.0.1 192.168.50.141 kubernetes-upgrade-415734 localhost minikube]
	I1007 13:06:24.282423  435362 provision.go:177] copyRemoteCerts
	I1007 13:06:24.282485  435362 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:06:24.282509  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:06:24.285254  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:24.285609  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:24.285644  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:24.285808  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:06:24.286044  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:24.286241  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:06:24.286420  435362 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa Username:docker}
	I1007 13:06:24.370795  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:06:24.404228  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1007 13:06:24.436434  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:06:24.468330  435362 provision.go:87] duration metric: took 623.709743ms to configureAuth
	I1007 13:06:24.468361  435362 buildroot.go:189] setting minikube options for container-runtime
	I1007 13:06:24.468556  435362 config.go:182] Loaded profile config "kubernetes-upgrade-415734": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:06:24.468654  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:06:24.471537  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:24.471930  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:24.471963  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:24.472295  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:06:24.472518  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:24.472710  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:24.472849  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:06:24.473021  435362 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:24.473289  435362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:06:24.473317  435362 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1007 13:06:24.953706  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetIP
	I1007 13:06:24.956574  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:24.956877  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:24.956911  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:24.957086  435085 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1007 13:06:24.961898  435085 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:06:24.976083  435085 kubeadm.go:883] updating cluster {Name:embed-certs-581312 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:embed-certs-581312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:06:24.976245  435085 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:06:24.976299  435085 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:06:25.009914  435085 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1007 13:06:25.010003  435085 ssh_runner.go:195] Run: which lz4
	I1007 13:06:25.014358  435085 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 13:06:25.018920  435085 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 13:06:25.018991  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1007 13:06:26.492613  435085 crio.go:462] duration metric: took 1.478287543s to copy over tarball
	I1007 13:06:26.492709  435085 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 13:06:28.610447  435085 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.117701172s)
	I1007 13:06:28.610475  435085 crio.go:469] duration metric: took 2.117822282s to extract the tarball
	I1007 13:06:28.610483  435085 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 13:06:28.651552  435085 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:06:28.698195  435085 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:06:28.698224  435085 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:06:28.698233  435085 kubeadm.go:934] updating node { 192.168.61.253 8443 v1.31.1 crio true true} ...
	I1007 13:06:28.698342  435085 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-581312 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-581312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:06:28.698419  435085 ssh_runner.go:195] Run: crio config
	I1007 13:06:28.745777  435085 cni.go:84] Creating CNI manager for ""
	I1007 13:06:28.745804  435085 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:06:28.745815  435085 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:06:28.745843  435085 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.253 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-581312 NodeName:embed-certs-581312 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:06:28.746171  435085 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-581312"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:06:28.746272  435085 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:06:28.758717  435085 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:06:28.758798  435085 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:06:28.770752  435085 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1007 13:06:28.790490  435085 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:06:28.810311  435085 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1007 13:06:28.830055  435085 ssh_runner.go:195] Run: grep 192.168.61.253	control-plane.minikube.internal$ /etc/hosts
	I1007 13:06:28.834469  435085 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.253	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:06:28.849698  435085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:06:28.982798  435085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:06:29.001264  435085 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312 for IP: 192.168.61.253
	I1007 13:06:29.001296  435085 certs.go:194] generating shared ca certs ...
	I1007 13:06:29.001320  435085 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:29.001540  435085 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 13:06:29.001602  435085 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 13:06:29.001616  435085 certs.go:256] generating profile certs ...
	I1007 13:06:29.001694  435085 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/client.key
	I1007 13:06:29.001715  435085 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/client.crt with IP's: []
	I1007 13:06:29.110859  435085 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/client.crt ...
	I1007 13:06:29.110888  435085 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/client.crt: {Name:mk332dbb215ee2eed50165c511c106ec4a14afbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:29.111104  435085 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/client.key ...
	I1007 13:06:29.111119  435085 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/client.key: {Name:mkcad91b7278aa053859618d4bb68f418b6f0a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:29.111235  435085 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.key.b91821d9
	I1007 13:06:29.111252  435085 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.crt.b91821d9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.253]
	I1007 13:06:29.269214  435085 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.crt.b91821d9 ...
	I1007 13:06:29.269248  435085 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.crt.b91821d9: {Name:mkabcbfc243bb428f0f1fdd8d193de99474623f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:29.269439  435085 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.key.b91821d9 ...
	I1007 13:06:29.269456  435085 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.key.b91821d9: {Name:mk88655de16796ee5d368e5c094890bc5214aa4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:29.269562  435085 certs.go:381] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.crt.b91821d9 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.crt
	I1007 13:06:29.269673  435085 certs.go:385] copying /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.key.b91821d9 -> /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.key
	I1007 13:06:29.269733  435085 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/proxy-client.key
	I1007 13:06:29.269750  435085 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/proxy-client.crt with IP's: []
	I1007 13:06:29.426863  435085 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/proxy-client.crt ...
	I1007 13:06:29.426899  435085 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/proxy-client.crt: {Name:mkdf4a75101e14e614c1bb7d4fbedc1e6b77b122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:29.427112  435085 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/proxy-client.key ...
	I1007 13:06:29.427132  435085 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/proxy-client.key: {Name:mkc1ced97b9567d2e6c074e27e4709d0b01e94f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:29.427343  435085 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 13:06:29.427385  435085 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 13:06:29.427416  435085 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:06:29.427458  435085 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:06:29.427483  435085 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:06:29.427507  435085 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 13:06:29.427549  435085 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
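	(Editor's note: the profile certs generated above include an apiserver serving cert whose SANs cover the service VIP, loopback, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.61.253, per the "Generating cert ... with IP's" line). A hedged way to inspect those SANs once the cert has been copied to /var/lib/minikube/certs/apiserver.crt a few lines below:)
	    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'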
	I1007 13:06:29.428220  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:06:29.454955  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 13:06:29.480343  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:06:29.506322  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 13:06:29.530789  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1007 13:06:29.556119  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 13:06:29.581662  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:06:29.606395  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/embed-certs-581312/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:06:29.631993  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 13:06:29.658983  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 13:06:29.684885  435085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:06:29.710587  435085 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:06:29.729223  435085 ssh_runner.go:195] Run: openssl version
	I1007 13:06:29.735644  435085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 13:06:29.748200  435085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 13:06:29.753145  435085 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 13:06:29.753214  435085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 13:06:29.759536  435085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 13:06:29.772906  435085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 13:06:29.795040  435085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 13:06:29.800183  435085 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 13:06:29.800265  435085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 13:06:29.806709  435085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:06:29.818476  435085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:06:29.830735  435085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:29.835946  435085 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:29.836013  435085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:29.843295  435085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:06:29.855581  435085 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:06:29.860187  435085 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 13:06:29.860257  435085 kubeadm.go:392] StartCluster: {Name:embed-certs-581312 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:embed-certs-581312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:06:29.860367  435085 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:06:29.860436  435085 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:06:29.899899  435085 cri.go:89] found id: ""
	I1007 13:06:29.899980  435085 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:06:29.912695  435085 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 13:06:29.928482  435085 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 13:06:29.940880  435085 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 13:06:29.940907  435085 kubeadm.go:157] found existing configuration files:
	
	I1007 13:06:29.940954  435085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 13:06:29.951682  435085 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 13:06:29.951753  435085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 13:06:29.962877  435085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 13:06:29.974389  435085 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 13:06:29.974475  435085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 13:06:29.985792  435085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 13:06:29.996715  435085 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 13:06:29.996777  435085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 13:06:30.008566  435085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 13:06:30.019778  435085 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 13:06:30.019857  435085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 13:06:30.031255  435085 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 13:06:30.145046  435085 kubeadm.go:310] W1007 13:06:30.115946     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:06:30.146185  435085 kubeadm.go:310] W1007 13:06:30.117275     829 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 13:06:30.255413  435085 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 13:06:33.865147  435362 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1007 13:06:33.865194  435362 machine.go:96] duration metric: took 10.410563339s to provisionDockerMachine
	I1007 13:06:33.865223  435362 start.go:293] postStartSetup for "kubernetes-upgrade-415734" (driver="kvm2")
	I1007 13:06:33.865240  435362 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:06:33.865283  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:06:33.865648  435362 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:06:33.865684  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:06:33.868777  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:33.869384  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:33.869424  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:33.869686  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:06:33.869942  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:33.870142  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:06:33.870376  435362 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa Username:docker}
	I1007 13:06:33.958432  435362 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:06:33.963706  435362 info.go:137] Remote host: Buildroot 2023.02.9
	I1007 13:06:33.963747  435362 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/addons for local assets ...
	I1007 13:06:33.963843  435362 filesync.go:126] Scanning /home/jenkins/minikube-integration/19763-377026/.minikube/files for local assets ...
	I1007 13:06:33.963983  435362 filesync.go:149] local asset: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem -> 3842712.pem in /etc/ssl/certs
	I1007 13:06:33.964152  435362 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:06:33.975265  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 13:06:34.003441  435362 start.go:296] duration metric: took 138.194197ms for postStartSetup
	I1007 13:06:34.003512  435362 fix.go:56] duration metric: took 10.575359873s for fixHost
	I1007 13:06:34.003605  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:06:34.006780  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:34.007293  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:34.007328  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:34.007599  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:06:34.007840  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:34.008108  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:34.008271  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:06:34.008445  435362 main.go:141] libmachine: Using SSH client type: native
	I1007 13:06:34.008620  435362 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864f00] 0x867be0 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1007 13:06:34.008631  435362 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 13:06:34.120049  435362 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728306394.111707085
	
	I1007 13:06:34.120082  435362 fix.go:216] guest clock: 1728306394.111707085
	I1007 13:06:34.120093  435362 fix.go:229] Guest: 2024-10-07 13:06:34.111707085 +0000 UTC Remote: 2024-10-07 13:06:34.003519054 +0000 UTC m=+20.959761846 (delta=108.188031ms)
	I1007 13:06:34.120153  435362 fix.go:200] guest clock delta is within tolerance: 108.188031ms
	I1007 13:06:34.120180  435362 start.go:83] releasing machines lock for "kubernetes-upgrade-415734", held for 10.692068165s
	I1007 13:06:34.120214  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:06:34.120528  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetIP
	I1007 13:06:34.123369  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:34.123800  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:34.123834  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:34.124030  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:06:34.124652  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:06:34.124844  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .DriverName
	I1007 13:06:34.124965  435362 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:06:34.125047  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:06:34.125090  435362 ssh_runner.go:195] Run: cat /version.json
	I1007 13:06:34.125120  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHHostname
	I1007 13:06:34.127993  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:34.128297  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:34.128376  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:34.128411  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:34.128637  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:06:34.128811  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:34.128884  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:34.128918  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:34.129080  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:06:34.129080  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHPort
	I1007 13:06:34.129251  435362 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa Username:docker}
	I1007 13:06:34.129440  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHKeyPath
	I1007 13:06:34.129605  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetSSHUsername
	I1007 13:06:34.129772  435362 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/kubernetes-upgrade-415734/id_rsa Username:docker}
	I1007 13:06:34.237133  435362 ssh_runner.go:195] Run: systemctl --version
	I1007 13:06:34.243828  435362 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1007 13:06:34.409480  435362 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 13:06:34.416410  435362 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 13:06:34.416501  435362 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:06:34.427521  435362 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1007 13:06:34.427559  435362 start.go:495] detecting cgroup driver to use...
	I1007 13:06:34.427641  435362 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 13:06:34.450934  435362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 13:06:34.467835  435362 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:06:34.467917  435362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:06:34.484350  435362 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:06:34.499842  435362 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:06:34.653194  435362 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:06:34.796501  435362 docker.go:233] disabling docker service ...
	I1007 13:06:34.796593  435362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:06:34.814184  435362 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:06:34.829751  435362 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:06:34.975602  435362 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:06:35.119639  435362 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:06:35.134941  435362 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:06:35.158972  435362 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1007 13:06:35.159082  435362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:35.174871  435362 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1007 13:06:35.174975  435362 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:35.187703  435362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:35.199374  435362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:35.211602  435362 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:06:35.223767  435362 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:35.235177  435362 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:35.247897  435362 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1007 13:06:35.259304  435362 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:06:35.270512  435362 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:06:35.281555  435362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:06:35.423706  435362 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1007 13:06:40.284261  435085 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 13:06:40.284345  435085 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 13:06:40.284435  435085 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 13:06:40.284572  435085 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 13:06:40.284714  435085 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 13:06:40.284818  435085 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 13:06:40.286339  435085 out.go:235]   - Generating certificates and keys ...
	I1007 13:06:40.286428  435085 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 13:06:40.286508  435085 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 13:06:40.286636  435085 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 13:06:40.286702  435085 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 13:06:40.286781  435085 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 13:06:40.286843  435085 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 13:06:40.286911  435085 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 13:06:40.287118  435085 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-581312 localhost] and IPs [192.168.61.253 127.0.0.1 ::1]
	I1007 13:06:40.287220  435085 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 13:06:40.287400  435085 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-581312 localhost] and IPs [192.168.61.253 127.0.0.1 ::1]
	I1007 13:06:40.287506  435085 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 13:06:40.287565  435085 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 13:06:40.287614  435085 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 13:06:40.287701  435085 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 13:06:40.287779  435085 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 13:06:40.287853  435085 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 13:06:40.287930  435085 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 13:06:40.288033  435085 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 13:06:40.288103  435085 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 13:06:40.288212  435085 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 13:06:40.288277  435085 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 13:06:40.289987  435085 out.go:235]   - Booting up control plane ...
	I1007 13:06:40.290077  435085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 13:06:40.290166  435085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 13:06:40.290262  435085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 13:06:40.290419  435085 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 13:06:40.290544  435085 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 13:06:40.290596  435085 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 13:06:40.290772  435085 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 13:06:40.290918  435085 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 13:06:40.291026  435085 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 527.43431ms
	I1007 13:06:40.291127  435085 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 13:06:40.291211  435085 kubeadm.go:310] [api-check] The API server is healthy after 5.502789478s
	I1007 13:06:40.291369  435085 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 13:06:40.291566  435085 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 13:06:40.291622  435085 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 13:06:40.291846  435085 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-581312 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 13:06:40.291927  435085 kubeadm.go:310] [bootstrap-token] Using token: g7j8wc.tpyfwss63fu1w0lp
	I1007 13:06:40.293368  435085 out.go:235]   - Configuring RBAC rules ...
	I1007 13:06:40.293484  435085 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 13:06:40.293581  435085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 13:06:40.293770  435085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 13:06:40.293950  435085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 13:06:40.294067  435085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 13:06:40.294164  435085 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 13:06:40.294320  435085 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 13:06:40.294364  435085 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 13:06:40.294414  435085 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 13:06:40.294424  435085 kubeadm.go:310] 
	I1007 13:06:40.294493  435085 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 13:06:40.294517  435085 kubeadm.go:310] 
	I1007 13:06:40.294594  435085 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 13:06:40.294601  435085 kubeadm.go:310] 
	I1007 13:06:40.294622  435085 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 13:06:40.294685  435085 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 13:06:40.294737  435085 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 13:06:40.294747  435085 kubeadm.go:310] 
	I1007 13:06:40.294816  435085 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 13:06:40.294823  435085 kubeadm.go:310] 
	I1007 13:06:40.294879  435085 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 13:06:40.294889  435085 kubeadm.go:310] 
	I1007 13:06:40.294965  435085 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 13:06:40.295046  435085 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 13:06:40.295108  435085 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 13:06:40.295115  435085 kubeadm.go:310] 
	I1007 13:06:40.295202  435085 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 13:06:40.295289  435085 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 13:06:40.295298  435085 kubeadm.go:310] 
	I1007 13:06:40.295427  435085 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g7j8wc.tpyfwss63fu1w0lp \
	I1007 13:06:40.295540  435085 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 \
	I1007 13:06:40.295565  435085 kubeadm.go:310] 	--control-plane 
	I1007 13:06:40.295569  435085 kubeadm.go:310] 
	I1007 13:06:40.295658  435085 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 13:06:40.295667  435085 kubeadm.go:310] 
	I1007 13:06:40.295737  435085 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g7j8wc.tpyfwss63fu1w0lp \
	I1007 13:06:40.295834  435085 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2fd6348052a965f34940e5ba90174bfbd02270cfad7be225de56832e31ef38d5 
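	(Editor's note: the join command printed above embeds a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA's DER-encoded public key. If that hash ever needs recomputing on this control plane, the standard recipe from the kubeadm docs looks roughly like the following, assuming the usual RSA CA and the certificate directory minikube uses in this log:)
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'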
	I1007 13:06:40.295845  435085 cni.go:84] Creating CNI manager for ""
	I1007 13:06:40.295852  435085 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:06:40.297310  435085 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 13:06:41.289254  435362 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.865503166s)
	I1007 13:06:41.289288  435362 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1007 13:06:41.289337  435362 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1007 13:06:41.294675  435362 start.go:563] Will wait 60s for crictl version
	I1007 13:06:41.294762  435362 ssh_runner.go:195] Run: which crictl
	I1007 13:06:41.299158  435362 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:06:41.342582  435362 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1007 13:06:41.342700  435362 ssh_runner.go:195] Run: crio --version
	I1007 13:06:41.374240  435362 ssh_runner.go:195] Run: crio --version
	I1007 13:06:41.410352  435362 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1007 13:06:41.411834  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) Calling .GetIP
	I1007 13:06:41.414536  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:41.415001  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b1:08", ip: ""} in network mk-kubernetes-upgrade-415734: {Iface:virbr2 ExpiryTime:2024-10-07 14:05:45 +0000 UTC Type:0 Mac:52:54:00:52:b1:08 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:kubernetes-upgrade-415734 Clientid:01:52:54:00:52:b1:08}
	I1007 13:06:41.415034  435362 main.go:141] libmachine: (kubernetes-upgrade-415734) DBG | domain kubernetes-upgrade-415734 has defined IP address 192.168.50.141 and MAC address 52:54:00:52:b1:08 in network mk-kubernetes-upgrade-415734
	I1007 13:06:41.415273  435362 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1007 13:06:41.420196  435362 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-415734 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-415734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:06:41.420338  435362 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1007 13:06:41.420413  435362 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:06:41.467796  435362 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:06:41.467822  435362 crio.go:433] Images already preloaded, skipping extraction
	I1007 13:06:41.467884  435362 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:06:41.506343  435362 crio.go:514] all images are preloaded for cri-o runtime.
	I1007 13:06:41.506372  435362 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:06:41.506384  435362 kubeadm.go:934] updating node { 192.168.50.141 8443 v1.31.1 crio true true} ...
	I1007 13:06:41.506511  435362 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-415734 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-415734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 13:06:41.506593  435362 ssh_runner.go:195] Run: crio config
	I1007 13:06:41.560881  435362 cni.go:84] Creating CNI manager for ""
	I1007 13:06:41.560905  435362 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 13:06:41.560915  435362 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:06:41.560936  435362 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.141 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-415734 NodeName:kubernetes-upgrade-415734 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:06:41.561075  435362 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-415734"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.141
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.141"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
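	(The kubeadm.yaml rendered above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. As a rough illustration of how such a stream can be inspected offline, the Go sketch below splits the documents and reports each kind; it assumes gopkg.in/yaml.v3 and a local copy of the file, neither of which is part of the test harness.)
	package main
	
	import (
		"fmt"
		"io"
		"log"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	// splitKubeadmConfig decodes a multi-document YAML stream (such as the
	// kubeadm.yaml shown above) and returns the "kind" of each document.
	func splitKubeadmConfig(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
	
		var kinds []string
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				return nil, err
			}
			kind, _ := doc["kind"].(string)
			kinds = append(kinds, kind)
		}
		return kinds, nil
	}
	
	func main() {
		// Local path is hypothetical; minikube writes the file on the guest
		// as /var/tmp/minikube/kubeadm.yaml.new before moving it into place.
		kinds, err := splitKubeadmConfig("kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(kinds) // e.g. [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	}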
	
	I1007 13:06:41.561140  435362 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:06:41.573489  435362 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:06:41.573578  435362 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:06:41.584523  435362 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1007 13:06:41.603291  435362 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:06:41.622409  435362 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I1007 13:06:41.646524  435362 ssh_runner.go:195] Run: grep 192.168.50.141	control-plane.minikube.internal$ /etc/hosts
	I1007 13:06:41.652511  435362 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:06:41.815946  435362 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:06:41.939039  435362 certs.go:68] Setting up /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734 for IP: 192.168.50.141
	I1007 13:06:41.939075  435362 certs.go:194] generating shared ca certs ...
	I1007 13:06:41.939100  435362 certs.go:226] acquiring lock for ca certs: {Name:mkb33346dabd36d58a78d24f94757f647b7cda33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:41.939338  435362 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key
	I1007 13:06:41.939404  435362 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key
	I1007 13:06:41.939418  435362 certs.go:256] generating profile certs ...
	I1007 13:06:41.939547  435362 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/client.key
	I1007 13:06:41.939617  435362 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.key.5df95fcf
	I1007 13:06:41.939677  435362 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.key
	I1007 13:06:41.939834  435362 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem (1338 bytes)
	W1007 13:06:41.939896  435362 certs.go:480] ignoring /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271_empty.pem, impossibly tiny 0 bytes
	I1007 13:06:41.939911  435362 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 13:06:41.939949  435362 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:06:41.939996  435362 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:06:41.940028  435362 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/certs/key.pem (1679 bytes)
	I1007 13:06:41.940081  435362 certs.go:484] found cert: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem (1708 bytes)
	I1007 13:06:41.940983  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:06:42.054775  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 13:06:42.202438  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:06:42.357399  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 13:06:42.627993  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1007 13:06:42.752141  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:06:40.298783  435085 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 13:06:40.310507  435085 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 13:06:40.329488  435085 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 13:06:40.329604  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:06:40.329624  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-581312 minikube.k8s.io/updated_at=2024_10_07T13_06_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=55a088b4b31722f6a33d4d5d4ae6e59a42bb414b minikube.k8s.io/name=embed-certs-581312 minikube.k8s.io/primary=true
	I1007 13:06:40.364866  435085 ops.go:34] apiserver oom_adj: -16
	I1007 13:06:40.561168  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:06:41.061505  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:06:41.562222  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:06:42.061332  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:06:42.562187  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:06:43.061954  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:06:43.562180  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:06:44.061378  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:06:44.561937  435085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 13:06:44.754838  435085 kubeadm.go:1113] duration metric: took 4.425305988s to wait for elevateKubeSystemPrivileges
	I1007 13:06:44.754895  435085 kubeadm.go:394] duration metric: took 14.894642561s to StartCluster
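	(The repeated `kubectl get sa default` runs above are minikube polling, roughly every 500ms, until the default service account exists before it binds cluster-admin to kube-system. Below is a minimal Go sketch of that polling pattern, assuming kubectl is on PATH; the function name, interval, and timeout are illustrative and not the actual minikube code.)
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	// waitForDefaultServiceAccount retries `kubectl get sa default` until it
	// succeeds or the timeout elapses, mirroring the polling loop in the log
	// above. Kubeconfig handling and error reporting are simplified.
	func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
			if err := cmd.Run(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("default service account not ready after %s", timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		// Path is illustrative; on the minikube guest it is /var/lib/minikube/kubeconfig.
		if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}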
	I1007 13:06:44.754931  435085 settings.go:142] acquiring lock: {Name:mk1ff033f29b570679652ae5ee30e0799b0658dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:44.755050  435085 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 13:06:44.757380  435085 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19763-377026/kubeconfig: {Name:mkb063dd9004b3380daebd5398a27c65eb7a9c7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:06:44.757757  435085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 13:06:44.757758  435085 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.253 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1007 13:06:44.757907  435085 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:06:44.758001  435085 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-581312"
	I1007 13:06:44.758026  435085 config.go:182] Loaded profile config "embed-certs-581312": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 13:06:44.758042  435085 addons.go:69] Setting default-storageclass=true in profile "embed-certs-581312"
	I1007 13:06:44.758067  435085 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-581312"
	I1007 13:06:44.758033  435085 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-581312"
	I1007 13:06:44.758125  435085 host.go:66] Checking if "embed-certs-581312" exists ...
	I1007 13:06:44.758606  435085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:06:44.758697  435085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:06:44.758606  435085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:06:44.758767  435085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:06:44.759654  435085 out.go:177] * Verifying Kubernetes components...
	I1007 13:06:44.761270  435085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:06:44.782928  435085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34183
	I1007 13:06:44.782987  435085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I1007 13:06:44.783532  435085 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:06:44.783683  435085 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:06:44.784227  435085 main.go:141] libmachine: Using API Version  1
	I1007 13:06:44.784253  435085 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:06:44.784415  435085 main.go:141] libmachine: Using API Version  1
	I1007 13:06:44.784433  435085 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:06:44.784718  435085 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:06:44.784787  435085 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:06:44.785362  435085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:06:44.785408  435085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:06:44.785683  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetState
	I1007 13:06:44.790162  435085 addons.go:234] Setting addon default-storageclass=true in "embed-certs-581312"
	I1007 13:06:44.790247  435085 host.go:66] Checking if "embed-certs-581312" exists ...
	I1007 13:06:44.790693  435085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:06:44.790739  435085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:06:44.808277  435085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45379
	I1007 13:06:44.809247  435085 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:06:44.810043  435085 main.go:141] libmachine: Using API Version  1
	I1007 13:06:44.810067  435085 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:06:44.810553  435085 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:06:44.810809  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetState
	I1007 13:06:44.813087  435085 main.go:141] libmachine: (embed-certs-581312) Calling .DriverName
	I1007 13:06:44.813710  435085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1007 13:06:44.814235  435085 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:06:44.814893  435085 main.go:141] libmachine: Using API Version  1
	I1007 13:06:44.814914  435085 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:06:44.815440  435085 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:06:44.816132  435085 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 13:06:44.816204  435085 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 13:06:44.818278  435085 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:06:44.819965  435085 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:06:44.819994  435085 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:06:44.820029  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:44.827418  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:44.827906  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:44.827944  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:44.828166  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:44.828391  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:44.828496  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:44.828605  435085 sshutil.go:53] new ssh client: &{IP:192.168.61.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/embed-certs-581312/id_rsa Username:docker}
	I1007 13:06:44.838935  435085 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I1007 13:06:44.839797  435085 main.go:141] libmachine: () Calling .GetVersion
	I1007 13:06:44.840586  435085 main.go:141] libmachine: Using API Version  1
	I1007 13:06:44.840613  435085 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 13:06:44.841194  435085 main.go:141] libmachine: () Calling .GetMachineName
	I1007 13:06:44.841552  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetState
	I1007 13:06:44.843812  435085 main.go:141] libmachine: (embed-certs-581312) Calling .DriverName
	I1007 13:06:44.844127  435085 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:06:44.844146  435085 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:06:44.844170  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHHostname
	I1007 13:06:44.848410  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:44.848931  435085 main.go:141] libmachine: (embed-certs-581312) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:9f:db", ip: ""} in network mk-embed-certs-581312: {Iface:virbr3 ExpiryTime:2024-10-07 14:06:14 +0000 UTC Type:0 Mac:52:54:00:59:9f:db Iaid: IPaddr:192.168.61.253 Prefix:24 Hostname:embed-certs-581312 Clientid:01:52:54:00:59:9f:db}
	I1007 13:06:44.848959  435085 main.go:141] libmachine: (embed-certs-581312) DBG | domain embed-certs-581312 has defined IP address 192.168.61.253 and MAC address 52:54:00:59:9f:db in network mk-embed-certs-581312
	I1007 13:06:44.849352  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHPort
	I1007 13:06:44.849569  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHKeyPath
	I1007 13:06:44.849707  435085 main.go:141] libmachine: (embed-certs-581312) Calling .GetSSHUsername
	I1007 13:06:44.849830  435085 sshutil.go:53] new ssh client: &{IP:192.168.61.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/embed-certs-581312/id_rsa Username:docker}
	I1007 13:06:45.194128  435085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:06:45.264302  435085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:06:45.328952  435085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:06:45.329018  435085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1007 13:06:45.757136  435085 main.go:141] libmachine: Making call to close driver server
	I1007 13:06:45.757174  435085 main.go:141] libmachine: (embed-certs-581312) Calling .Close
	I1007 13:06:45.757535  435085 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:06:45.757552  435085 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:06:45.757561  435085 main.go:141] libmachine: Making call to close driver server
	I1007 13:06:45.757569  435085 main.go:141] libmachine: (embed-certs-581312) Calling .Close
	I1007 13:06:45.757824  435085 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:06:45.757845  435085 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:06:45.816477  435085 main.go:141] libmachine: Making call to close driver server
	I1007 13:06:45.816514  435085 main.go:141] libmachine: (embed-certs-581312) Calling .Close
	I1007 13:06:45.816904  435085 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:06:45.816925  435085 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:06:46.422744  435085 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.093668044s)
	I1007 13:06:46.422837  435085 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.158491702s)
	I1007 13:06:46.422846  435085 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1007 13:06:46.422779  435085 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.093783883s)
	I1007 13:06:46.422907  435085 main.go:141] libmachine: Making call to close driver server
	I1007 13:06:46.423127  435085 main.go:141] libmachine: (embed-certs-581312) Calling .Close
	I1007 13:06:46.424738  435085 node_ready.go:35] waiting up to 6m0s for node "embed-certs-581312" to be "Ready" ...
	I1007 13:06:46.425276  435085 main.go:141] libmachine: (embed-certs-581312) DBG | Closing plugin on server side
	I1007 13:06:46.425279  435085 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:06:46.425300  435085 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:06:46.425336  435085 main.go:141] libmachine: Making call to close driver server
	I1007 13:06:46.425349  435085 main.go:141] libmachine: (embed-certs-581312) Calling .Close
	I1007 13:06:46.425649  435085 main.go:141] libmachine: Successfully made call to close driver server
	I1007 13:06:46.425690  435085 main.go:141] libmachine: Making call to close connection to plugin binary
	I1007 13:06:46.425676  435085 main.go:141] libmachine: (embed-certs-581312) DBG | Closing plugin on server side
	I1007 13:06:46.427528  435085 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
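	(The bash pipeline completed above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway, 192.168.61.1 in this run. The Go sketch below shows only the string transformation applied to the Corefile, inserting a hosts block ahead of the forward directive; the Corefile fragment in main is illustrative.)
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// injectHostRecord inserts a CoreDNS "hosts" block ahead of the
	// "forward . /etc/resolv.conf" directive, which is what the sed pipeline
	// in the log above does to publish host.minikube.internal to cluster DNS.
	// The gateway IP is a parameter; 192.168.61.1 is the value used above.
	func injectHostRecord(corefile, gatewayIP string) string {
		hostsBlock := fmt.Sprintf(
			"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
			gatewayIP)
	
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.Contains(line, "forward . /etc/resolv.conf") {
				out.WriteString(hostsBlock)
			}
			out.WriteString(line)
		}
		return out.String()
	}
	
	func main() {
		// Minimal Corefile fragment for illustration only.
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.61.1"))
	}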
	I1007 13:06:43.112828  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:06:43.238059  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/kubernetes-upgrade-415734/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:06:43.302769  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:06:43.348837  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/certs/384271.pem --> /usr/share/ca-certificates/384271.pem (1338 bytes)
	I1007 13:06:43.396020  435362 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/ssl/certs/3842712.pem --> /usr/share/ca-certificates/3842712.pem (1708 bytes)
	I1007 13:06:43.433563  435362 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:06:43.458464  435362 ssh_runner.go:195] Run: openssl version
	I1007 13:06:43.465396  435362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:06:43.483979  435362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:43.491205  435362 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 11:32 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:43.491299  435362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:06:43.498332  435362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:06:43.511247  435362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/384271.pem && ln -fs /usr/share/ca-certificates/384271.pem /etc/ssl/certs/384271.pem"
	I1007 13:06:43.525572  435362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/384271.pem
	I1007 13:06:43.533537  435362 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 11:53 /usr/share/ca-certificates/384271.pem
	I1007 13:06:43.533629  435362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/384271.pem
	I1007 13:06:43.543941  435362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/384271.pem /etc/ssl/certs/51391683.0"
	I1007 13:06:43.562238  435362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3842712.pem && ln -fs /usr/share/ca-certificates/3842712.pem /etc/ssl/certs/3842712.pem"
	I1007 13:06:43.579503  435362 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3842712.pem
	I1007 13:06:43.584978  435362 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 11:53 /usr/share/ca-certificates/3842712.pem
	I1007 13:06:43.585069  435362 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3842712.pem
	I1007 13:06:43.596418  435362 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3842712.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:06:43.610066  435362 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:06:43.618263  435362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:06:43.628018  435362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:06:43.634510  435362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:06:43.642095  435362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:06:43.675140  435362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:06:43.736011  435362 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
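	(The openssl runs above install each CA under /etc/ssl/certs as a <subject-hash>.0 symlink and use `-checkend 86400` to confirm that no cluster certificate expires within 24 hours. The Go sketch below drives the same two checks via os/exec; paths and function names are illustrative only, not minikube's implementation.)
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCACert computes the OpenSSL subject hash of a CA certificate and
	// links it as <certsDir>/<hash>.0, the same shape as the `openssl x509
	// -hash -noout` plus `ln -fs` commands in the log above.
	func linkCACert(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		os.Remove(link) // replace any stale link, like `ln -fs`
		return os.Symlink(certPath, link)
	}
	
	// certValidFor24h mirrors `openssl x509 -checkend 86400`: it returns true
	// when the certificate will still be valid one day from now.
	func certValidFor24h(certPath string) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
		return err == nil
	}
	
	func main() {
		fmt.Println(certValidFor24h("/var/lib/minikube/certs/apiserver.crt"))
	}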
	I1007 13:06:43.754819  435362 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-415734 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-415734 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:06:43.754983  435362 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1007 13:06:43.755064  435362 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:06:43.899135  435362 cri.go:89] found id: "a997ee80aad1f7a454af4e55f63868e2e252a1e387b1f6e2cd314271720e4450"
	I1007 13:06:43.899171  435362 cri.go:89] found id: "8f63e2a74a3ef60136ce294a2ba3c4825dcc435fbf5d7e03e26a13aa8bc9d3e8"
	I1007 13:06:43.899177  435362 cri.go:89] found id: "20b1b62f7620ef56810775f442bb624591037c7a7ab5f00f6a2b8a4fcb8fde3b"
	I1007 13:06:43.899182  435362 cri.go:89] found id: "f50909893d272beb40091068751f28117754deb05170ea9f895f699e82665fe1"
	I1007 13:06:43.899187  435362 cri.go:89] found id: "16c3bdbb28e413ba53e3ab8170b3e8074bfea3c1968b9ab71b77333a52bdb57e"
	I1007 13:06:43.899192  435362 cri.go:89] found id: "2f19a52e004700fac1744cc34843cc99f88d08f72f89b112a0e32762b29b1714"
	I1007 13:06:43.899196  435362 cri.go:89] found id: "036c44ff5e2a07744fb52ac2d0e62a72b9db8514a9ab5303c087463a6d813404"
	I1007 13:06:43.899201  435362 cri.go:89] found id: "a6217784bc314d3e32b9e9e347bd4499c7f20e78f26de45c653512977ab725d0"
	I1007 13:06:43.899206  435362 cri.go:89] found id: "a9e31294b3eb7ce5832f99a4488b3eb242724b8dd23f2019081a4b34c64e1822"
	I1007 13:06:43.899214  435362 cri.go:89] found id: "b4024452d003a132bc994e6c4144d5274322a999a1fb0554d7e46ad86af40e0c"
	I1007 13:06:43.899219  435362 cri.go:89] found id: "692f98c33794b709c57748c5349f2aa718f87dcc99e29bff26a54dbc6e04338f"
	I1007 13:06:43.899224  435362 cri.go:89] found id: "1839858b10775b0a229983e55cebd7d4d1540ccdd5c6fea010c004504c310c19"
	I1007 13:06:43.899230  435362 cri.go:89] found id: "ec04d26ac0b52a1ad5cbbf4f497b8c338267784fa48b0b15e602b9aff804eb23"
	I1007 13:06:43.899234  435362 cri.go:89] found id: ""
	I1007 13:06:43.899298  435362 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-415734 -n kubernetes-upgrade-415734
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-415734 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-415734" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-415734
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-415734: (1.136427043s)
--- FAIL: TestKubernetesUpgrade (384.54s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7200.065s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.124:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.124:8443: connect: connection refused
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (30m50s)
		TestStartStop (32m0s)
		TestStartStop/group/default-k8s-diff-port (24m4s)
		TestStartStop/group/default-k8s-diff-port/serial (24m4s)
		TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (2m55s)
		TestStartStop/group/embed-certs (25m23s)
		TestStartStop/group/embed-certs/serial (25m23s)
		TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (4m12s)
		TestStartStop/group/no-preload (27m38s)
		TestStartStop/group/no-preload/serial (27m38s)
		TestStartStop/group/no-preload/serial/AddonExistsAfterStop (2m25s)
		TestStartStop/group/old-k8s-version (27m41s)
		TestStartStop/group/old-k8s-version/serial (27m41s)
		TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (20s)
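	(The panic above is the suite-wide alarm from the 2h `go test -timeout`, raised by testing.(*M).startAlarm in the first goroutine below; the AddonExistsAfterStop pollers simply kept retrying against a refused API server until it fired. One way such helpers could bail out before the global alarm is to respect t.Deadline(), sketched below; the helper name and margin are hypothetical and not the existing helpers_test.go API.)
	package integration
	
	import (
		"testing"
		"time"
	)
	
	// pollUntilDeadline repeatedly calls check until it returns true, the
	// local timeout elapses, or the overall `go test -timeout` deadline
	// (reported by t.Deadline) is close, so a stuck poller fails its own test
	// instead of tripping the suite-wide alarm seen above.
	func pollUntilDeadline(t *testing.T, timeout, interval time.Duration, check func() bool) bool {
		t.Helper()
		stop := time.Now().Add(timeout)
		if d, ok := t.Deadline(); ok && d.Before(stop) {
			stop = d.Add(-30 * time.Second) // leave room for cleanup before the alarm fires
		}
		for time.Now().Before(stop) {
			if check() {
				return true
			}
			time.Sleep(interval)
		}
		return false
	}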

                                                
                                                
goroutine 9031 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 25 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000026000, 0xc000827bc8)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
testing.runTests(0xc0009060d8, {0x51b7ac0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x411b30?, 0x52cfca0?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000047400)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000047400)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00043c880)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2328 [IO wait, 99 minutes]:
internal/poll.runtime_pollWait(0x7ff9b0b3ced8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000044400?, 0x2c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000044400)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc000044400)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0005531c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0005531c0)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc000b71e00, {0x3942de0, 0xc0005531c0})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc000b71e00)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0016009c0?, 0xc0016009c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 2325
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 5308 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0019ea4e0, {0x2c60db3?, 0xc001402d70?}, 0xc0023a4080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0019ea4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0019ea4e0, 0xc00195e080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4855
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4853 [chan receive, 33 minutes]:
testing.(*testContext).waitParallel(0xc00090f680)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000027860)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000027860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000027860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000027860, 0xc0014bcb40)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4851
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 140 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc0008a8000}, 0xc001402f50, 0xc00151bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc0008a8000}, 0x58?, 0xc001402f50, 0xc001402f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc0008a8000?}, 0xc0015104e0?, 0x559d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001402fd0?, 0x593fe4?, 0xc000b7a090?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 108
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 5048 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00090f680)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0019eb520)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019eb520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0019eb520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0019eb520, 0xc000117b00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4916
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 141 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 140
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 108 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c2f7c0, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 166
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 107 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 166
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 139 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000c2f790, 0x2c)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc001514d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c2f7c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009f62b0, {0x3916f20, 0xc0008aa030}, 0x1, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009f62b0, 0x3b9aca00, 0x0, 0x1, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 108
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 5357 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5356
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 5368 [chan receive]:
testing.(*T).Run(0xc001906b60, {0x2c60db3?, 0xc001406570?}, 0xc0023a4100)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001906b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001906b60, 0xc0023a4000)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4852
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 6195 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc0008a8000}, 0xc001ea7f50, 0xc001ea7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc0008a8000}, 0x0?, 0xc001ea7f50, 0xc001ea7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc0008a8000?}, 0x9e9a36?, 0xc0004fdb00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0004fdb00?, 0x593fe4?, 0xc0008a9340?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6189
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 5047 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00090f680)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0019eb380)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019eb380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0019eb380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0019eb380, 0xc000117a80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4916
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 2530 [chan receive, 95 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000553980, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2473
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2518 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc0008a8000}, 0xc00163cf50, 0xc0000ccf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc0008a8000}, 0xe0?, 0xc00163cf50, 0xc00163cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc0008a8000?}, 0x9e9a36?, 0xc001528180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593f85?, 0xc0009d1680?, 0xc00150f5e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2530
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 5473 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc0008a8000}, 0xc000afcf50, 0xc000afcf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc0008a8000}, 0xc0?, 0xc000afcf50, 0xc000afcf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc0008a8000?}, 0x9e9a36?, 0xc001651380?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000b7e0f0?, 0xc00195a160?, 0xc00050b7a8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5515
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 4916 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc0019ea340, 0xc001e8c270)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 4673
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5355 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0009a31d0, 0x5)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0016ebd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009a3200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009f6070, {0x3916f20, 0xc00080c150}, 0x1, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009f6070, 0x3b9aca00, 0x0, 0x1, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5413
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 4938 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00090f680)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0019eab60)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019eab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0019eab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0019eab60, 0xc000117300)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4916
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5423 [chan receive, 4 minutes]:
testing.(*T).Run(0xc0019ea1a0, {0x2c60db3?, 0xc000af4570?}, 0xc0023a4080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0019ea1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0019ea1a0, 0xc00195f200)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4857
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4673 [chan receive, 31 minutes]:
testing.(*T).Run(0xc001600d00, {0x2c3cf87?, 0x5595bc?}, 0xc001e8c270)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001600d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc001600d00, 0x35da3f0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5554 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5473
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4940 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00090f680)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0019eb040)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019eb040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0019eb040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0019eb040, 0xc000117500)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4916
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5356 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x394fe70, 0xc0008a8000}, 0xc001ea3f50, 0xc001ea3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x394fe70, 0xc0008a8000}, 0x11?, 0xc001ea3f50, 0xc001ea3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x394fe70?, 0xc0008a8000?}, 0xc001906820?, 0x559d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001404fd0?, 0x593fe4?, 0xc0023a4100?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5413
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 7903 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394fae8, 0xc0008592c0}, {0x3943440, 0xc00090b700}, 0x1, 0x0, 0xc000091b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394fb58?, 0xc000496230?}, 0x3b9aca00, 0xc0013b5d38?, 0x1, 0xc0013b5b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394fb58, 0xc000496230}, 0xc001510680, {0xc0016b4180, 0x1c}, {0x2c60d4f, 0x14}, {0x2c76d8d, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x394fb58, 0xc000496230}, 0xc001510680, {0xc0016b4180, 0x1c}, {0x2c638ed?, 0xc00229ff60?}, {0x559473?, 0x4b186f?}, {0xc0008d6100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x125
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001510680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001510680, 0xc000044280)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 5484
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5798 [IO wait]:
internal/poll.runtime_pollWait(0x7ff9b0b3d610, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000044500?, 0xc00172e000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000044500, {0xc00172e000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc000044500, {0xc00172e000?, 0x10?, 0xc000afd8a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000a75a30, {0xc00172e000?, 0xc00172e05f?, 0x70?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0019c6798, {0xc00172e000?, 0x0?, 0xc0019c6798?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0019202b8, {0x3917560, 0xc0019c6798})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001920008, {0x7ff9b0070fc8, 0xc001914ee8}, 0xc000afda10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001920008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc001920008, {0xc00174b000, 0x1000, 0xc001481500?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0024e6d80, {0xc000778d60, 0x9, 0x5168880?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3915620, 0xc0024e6d80}, {0xc000778d60, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc000778d60, 0x9, 0x47b965?}, {0x3915620?, 0xc0024e6d80?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000778d20)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000afdfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001451b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 5797
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 5725 [IO wait]:
internal/poll.runtime_pollWait(0x7ff9b0b3c9b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0023a5000?, 0xc00165e800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0023a5000, {0xc00165e800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc0023a5000, {0xc00165e800?, 0x10?, 0xc0015cd8a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0016345d0, {0xc00165e800?, 0xc00165e85f?, 0x70?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0019c6738, {0xc00165e800?, 0x0?, 0xc0019c6738?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0005af0b8, {0x3917560, 0xc0019c6738})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0005aee08, {0x7ff9b0070fc8, 0xc001d68978}, 0xc0015cda10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0005aee08, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0005aee08, {0xc0014ce000, 0x1000, 0xc001481500?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc001899da0, {0xc0019c23c0, 0x9, 0x5168880?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3915620, 0xc001899da0}, {0xc0019c23c0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0019c23c0, 0x9, 0x47b965?}, {0x3915620?, 0xc001899da0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0019c2380)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0015cdfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0013c7e00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 5724
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 6196 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6195
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2517 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000553950, 0x26)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0000d3d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000553980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008cdf40, {0x3916f20, 0xc0008aa630}, 0x1, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008cdf40, 0x3b9aca00, 0x0, 0x1, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2530
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 6189 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c2fdc0, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6187
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 6188 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6187
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 5515 [chan receive, 23 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00177c580, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5527
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4857 [chan receive, 25 minutes]:
testing.(*T).Run(0xc001906340, {0x2c3e385?, 0x0?}, 0xc00195f200)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001906340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001906340, 0xc0014bcc80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4851
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4851 [chan receive, 33 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x49b
testing.tRunner(0xc000026820, 0x35da630)
	/usr/local/go/src/testing/testing.go:1696 +0x12d
created by testing.(*T).Run in goroutine 4788
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5412 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5383
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 3213 [chan send, 92 minutes]:
os/exec.(*Cmd).watchCtx(0xc001466900, 0xc001fc5b20)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 3212
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 4788 [chan receive, 33 minutes]:
testing.(*T).Run(0xc001601380, {0x2c3cf87?, 0x559473?}, 0x35da630)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001601380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001601380, 0x35da438)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 6194 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000c2fd90, 0x1)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc0013d7d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c2fdc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00171a010, {0x3916f20, 0xc0015da000}, 0x1, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00171a010, 0x3b9aca00, 0x0, 0x1, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 6189
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3363 [select, 92 minutes]:
net/http.(*persistConn).readLoop(0xc00177b320)
	/usr/local/go/src/net/http/transport.go:2325 +0xca5
created by net/http.(*Transport).dialConn in goroutine 3345
	/usr/local/go/src/net/http/transport.go:1874 +0x154f

                                                
                                                
goroutine 2519 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2518
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3036 [chan send, 92 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015f3080, 0xc000635c70)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 2444
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 5472 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00177c550, 0x4)
	/usr/local/go/src/runtime/sema.go:587 +0x159
sync.(*Cond).Wait(0xc000829d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x396b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00177c580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b7e7e0, {0x3916f20, 0xc001e6a660}, 0x1, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b7e7e0, 0x3b9aca00, 0x0, 0x1, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 5515
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 3364 [select, 92 minutes]:
net/http.(*persistConn).writeLoop(0xc00177b320)
	/usr/local/go/src/net/http/transport.go:2519 +0xe7
created by net/http.(*Transport).dialConn in goroutine 3345
	/usr/local/go/src/net/http/transport.go:1875 +0x15a5

                                                
                                                
goroutine 2990 [chan send, 92 minutes]:
os/exec.(*Cmd).watchCtx(0xc0015f2480, 0xc000635500)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 2989
	/usr/local/go/src/os/exec/exec.go:759 +0x953

                                                
                                                
goroutine 4939 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00090f680)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0019ead00)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019ead00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0019ead00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0019ead00, 0xc000117480)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4916
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5514 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5527
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2513 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3946120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2473
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 8323 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394fae8, 0xc0008834a0}, {0x3943440, 0xc001786700}, 0x1, 0x0, 0xc001473b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394fb58?, 0xc0006dc4d0?}, 0x3b9aca00, 0xc001473d38?, 0x1, 0xc001473b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394fb58, 0xc0006dc4d0}, 0xc001510820, {0xc0015e0270, 0x11}, {0x2c60d4f, 0x14}, {0x2c76d8d, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x394fb58, 0xc0006dc4d0}, 0xc001510820, {0xc0015e0270, 0x11}, {0x2c4767d?, 0xc000507760?}, {0x559473?, 0x4b186f?}, {0xc00004cd00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x125
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001510820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001510820, 0xc0023a4080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 5308
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 8633 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394fae8, 0xc00090fdb0}, {0x3943440, 0xc00266a020}, 0x1, 0x0, 0xc001477b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394fb58?, 0xc000490e00?}, 0x3b9aca00, 0xc001477d38?, 0x1, 0xc001477b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394fb58, 0xc000490e00}, 0xc0015101a0, {0xc001b7c378, 0x16}, {0x2c60d4f, 0x14}, {0x2c76d8d, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x394fb58, 0xc000490e00}, 0xc0015101a0, {0xc001b7c378, 0x16}, {0x2c52c89?, 0xc0000b8f60?}, {0x559473?, 0x4b186f?}, {0xc001834900, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x125
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0015101a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0015101a0, 0xc0023a4100)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 5368
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5638 [IO wait]:
internal/poll.runtime_pollWait(0x7ff9b0b3d2f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc000117f00?, 0xc00157c800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000117f00, {0xc00157c800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
net.(*netFD).Read(0xc000117f00, {0xc00157c800?, 0x9d7032?, 0xc000b039a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0008b8048, {0xc00157c800?, 0xc000c1e360?, 0xc00157c805?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0019c6810, {0xc00157c800?, 0x0?, 0xc0019c6810?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc0005ae9b8, {0x3917560, 0xc0019c6810})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0005ae708, {0x3916a40, 0xc0008b8048}, 0xc000b03a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0005ae708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc0005ae708, {0xc0013ec000, 0x1000, 0xc001481500?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc00151ff20, {0xc00056eac0, 0x9, 0x5168880?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3915620, 0xc00151ff20}, {0xc00056eac0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00056eac0, 0x9, 0x47b965?}, {0x3915620?, 0xc00151ff20?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00056ea80)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000b03fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0009d0180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:2250 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 5637
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.29.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 4854 [chan receive, 24 minutes]:
testing.(*T).Run(0xc000027a00, {0x2c3e385?, 0x0?}, 0xc00088c900)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000027a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc000027a00, 0xc0014bcb80)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4851
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 7748 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x394fae8, 0xc0008828c0}, {0x3943440, 0xc000c1e580}, 0x1, 0x0, 0xc0013b5b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x394fb58?, 0xc000641ea0?}, 0x3b9aca00, 0xc001489d38?, 0x1, 0xc001489b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x394fb58, 0xc000641ea0}, 0xc0015104e0, {0xc000897230, 0x12}, {0x2c60d4f, 0x14}, {0x2c76d8d, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x394fb58, 0xc000641ea0}, 0xc0015104e0, {0xc000897230, 0x12}, {0x2c49543?, 0xc001640760?}, {0x559473?, 0x4b186f?}, {0xc00004d700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x125
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0015104e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0015104e0, 0xc0023a4080)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 5423
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5484 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0006fa9c0, {0x2c60db3?, 0xc0000b8d70?}, 0xc000044280)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0006fa9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0006fa9c0, 0xc00088c900)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4854
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4855 [chan receive, 28 minutes]:
testing.(*T).Run(0xc001906000, {0x2c3e385?, 0x0?}, 0xc00195e080)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001906000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001906000, 0xc0014bcbc0)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4851
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4852 [chan receive, 28 minutes]:
testing.(*T).Run(0xc0000269c0, {0x2c3e385?, 0x0?}, 0xc0023a4000)
	/usr/local/go/src/testing/testing.go:1751 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0000269c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0000269c0, 0xc0014bcb00)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4851
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 5413 [chan receive, 25 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009a3200, 0xc0008a8000)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5383
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 4937 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00090f680)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0019ea9c0)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019ea9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0019ea9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0019ea9c0, 0xc000117280)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4916
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                                
goroutine 4917 [chan receive, 30 minutes]:
testing.(*testContext).waitParallel(0xc00090f680)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0019ea680)
	/usr/local/go/src/testing/testing.go:1485 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0019ea680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0019ea680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0019ea680, 0xc000116480)
	/usr/local/go/src/testing/testing.go:1690 +0xf4
created by testing.(*T).Run in goroutine 4916
	/usr/local/go/src/testing/testing.go:1743 +0x390

                                                
                                    

Test pass (175/228)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 7.26
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 3.69
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.15
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
22 TestOffline 83.87
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 204.32
31 TestAddons/serial/GCPAuth/Namespaces 2.37
34 TestAddons/parallel/Registry 16.17
36 TestAddons/parallel/InspektorGadget 11.9
40 TestAddons/parallel/Headlamp 18.75
41 TestAddons/parallel/CloudSpanner 5.59
43 TestAddons/parallel/NvidiaDevicePlugin 6.7
44 TestAddons/parallel/Yakd 11.84
46 TestCertOptions 49.57
47 TestCertExpiration 297.23
49 TestForceSystemdFlag 72.09
50 TestForceSystemdEnv 69.6
52 TestKVMDriverInstallOrUpdate 3.16
56 TestErrorSpam/setup 42.81
57 TestErrorSpam/start 0.38
58 TestErrorSpam/status 0.77
59 TestErrorSpam/pause 1.72
60 TestErrorSpam/unpause 1.79
61 TestErrorSpam/stop 5.23
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 84.76
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 54.12
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.42
73 TestFunctional/serial/CacheCmd/cache/add_local 1.47
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.91
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.12
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
81 TestFunctional/serial/ExtraConfig 32.01
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.54
84 TestFunctional/serial/LogsFileCmd 1.57
85 TestFunctional/serial/InvalidService 4.24
87 TestFunctional/parallel/ConfigCmd 0.38
88 TestFunctional/parallel/DashboardCmd 69.84
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 0.79
95 TestFunctional/parallel/ServiceCmdConnect 79.54
96 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/SSHCmd 0.43
100 TestFunctional/parallel/CpCmd 1.35
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.4
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
111 TestFunctional/parallel/License 0.16
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
116 TestFunctional/parallel/ImageCommands/ImageBuild 2.39
117 TestFunctional/parallel/ImageCommands/Setup 0.99
118 TestFunctional/parallel/Version/short 0.07
119 TestFunctional/parallel/Version/components 0.61
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.63
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
136 TestFunctional/parallel/ServiceCmd/DeployApp 71.17
137 TestFunctional/parallel/ServiceCmd/List 0.43
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
140 TestFunctional/parallel/ServiceCmd/Format 0.29
141 TestFunctional/parallel/ServiceCmd/URL 0.28
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
143 TestFunctional/parallel/ProfileCmd/profile_list 0.34
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
145 TestFunctional/parallel/MountCmd/any-port 56.36
146 TestFunctional/parallel/MountCmd/specific-port 1.79
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.37
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 197.27
158 TestMultiControlPlane/serial/DeployApp 6.61
159 TestMultiControlPlane/serial/PingHostFromPods 1.4
160 TestMultiControlPlane/serial/AddWorkerNode 57.56
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
163 TestMultiControlPlane/serial/CopyFile 13.41
169 TestMultiControlPlane/serial/DeleteSecondaryNode 13.97
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
172 TestMultiControlPlane/serial/RestartCluster 243.32
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
174 TestMultiControlPlane/serial/AddSecondaryNode 78.06
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
179 TestJSONOutput/start/Command 93
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.74
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.65
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.37
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.22
207 TestMainNoArgs 0.05
208 TestMinikubeProfile 88.45
211 TestMountStart/serial/StartWithMountFirst 30.3
212 TestMountStart/serial/VerifyMountFirst 0.4
213 TestMountStart/serial/StartWithMountSecond 27.86
214 TestMountStart/serial/VerifyMountSecond 0.39
215 TestMountStart/serial/DeleteFirst 0.71
216 TestMountStart/serial/VerifyMountPostDelete 0.4
217 TestMountStart/serial/Stop 1.29
218 TestMountStart/serial/RestartStopped 22.55
219 TestMountStart/serial/VerifyMountPostStop 0.39
222 TestMultiNode/serial/FreshStart2Nodes 109.86
223 TestMultiNode/serial/DeployApp2Nodes 3.99
224 TestMultiNode/serial/PingHostFrom2Pods 0.85
225 TestMultiNode/serial/AddNode 52.24
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.59
228 TestMultiNode/serial/CopyFile 7.51
229 TestMultiNode/serial/StopNode 2.45
230 TestMultiNode/serial/StartAfterStop 38.46
232 TestMultiNode/serial/DeleteNode 2.17
234 TestMultiNode/serial/RestartMultiNode 199.21
235 TestMultiNode/serial/ValidateNameConflict 45.59
242 TestScheduledStopUnix 119.31
246 TestRunningBinaryUpgrade 220.35
251 TestPause/serial/Start 56.05
252 TestStoppedBinaryUpgrade/Setup 0.42
253 TestStoppedBinaryUpgrade/Upgrade 179.73
254 TestPause/serial/SecondStartNoReconfiguration 84.12
262 TestPause/serial/Pause 0.98
263 TestPause/serial/VerifyStatus 0.3
264 TestPause/serial/Unpause 0.78
265 TestPause/serial/PauseAgain 0.98
266 TestPause/serial/DeletePaused 1.07
267 TestPause/serial/VerifyDeletedResources 0.79
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
270 TestNoKubernetes/serial/StartWithK8s 56.36
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
283 TestNoKubernetes/serial/StartWithStopK8s 47.06
284 TestNoKubernetes/serial/Start 50.23
285 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
286 TestNoKubernetes/serial/ProfileList 1.44
287 TestNoKubernetes/serial/Stop 1.36
288 TestNoKubernetes/serial/StartNoArgs 42.03
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
TestDownloadOnly/v1.20.0/json-events (7.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-243020 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-243020 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.26374267s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.26s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1007 11:31:28.687537  384271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1007 11:31:28.687654  384271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-243020
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-243020: exit status 85 (69.953982ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-243020 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |          |
	|         | -p download-only-243020        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:31:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:31:21.469041  384282 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:31:21.469208  384282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:31:21.469218  384282 out.go:358] Setting ErrFile to fd 2...
	I1007 11:31:21.469223  384282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:31:21.469426  384282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	W1007 11:31:21.469567  384282 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19763-377026/.minikube/config/config.json: open /home/jenkins/minikube-integration/19763-377026/.minikube/config/config.json: no such file or directory
	I1007 11:31:21.470125  384282 out.go:352] Setting JSON to true
	I1007 11:31:21.471143  384282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4427,"bootTime":1728296254,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:31:21.471214  384282 start.go:139] virtualization: kvm guest
	I1007 11:31:21.473663  384282 out.go:97] [download-only-243020] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:31:21.473832  384282 notify.go:220] Checking for updates...
	W1007 11:31:21.473831  384282 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 11:31:21.475178  384282 out.go:169] MINIKUBE_LOCATION=19763
	I1007 11:31:21.476736  384282 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:31:21.478469  384282 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:31:21.480107  384282 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:21.481707  384282 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1007 11:31:21.484675  384282 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 11:31:21.484948  384282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:31:21.518829  384282 out.go:97] Using the kvm2 driver based on user configuration
	I1007 11:31:21.518867  384282 start.go:297] selected driver: kvm2
	I1007 11:31:21.518876  384282 start.go:901] validating driver "kvm2" against <nil>
	I1007 11:31:21.519275  384282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:31:21.519404  384282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19763-377026/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1007 11:31:21.535895  384282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1007 11:31:21.535950  384282 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 11:31:21.536487  384282 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1007 11:31:21.536656  384282 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 11:31:21.536710  384282 cni.go:84] Creating CNI manager for ""
	I1007 11:31:21.536771  384282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1007 11:31:21.536782  384282 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 11:31:21.536865  384282 start.go:340] cluster config:
	{Name:download-only-243020 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-243020 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:31:21.537123  384282 iso.go:125] acquiring lock: {Name:mk7755c11ca5bc85d0aadd1f33672ba630051a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 11:31:21.539162  384282 out.go:97] Downloading VM boot image ...
	I1007 11:31:21.539222  384282 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/iso/amd64/minikube-v1.34.0-1727108440-19696-amd64.iso
	I1007 11:31:24.149085  384282 out.go:97] Starting "download-only-243020" primary control-plane node in "download-only-243020" cluster
	I1007 11:31:24.149115  384282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 11:31:24.178450  384282 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1007 11:31:24.178486  384282 cache.go:56] Caching tarball of preloaded images
	I1007 11:31:24.178699  384282 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1007 11:31:24.180660  384282 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 11:31:24.180693  384282 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1007 11:31:24.210401  384282 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-243020 host does not exist
	  To start a cluster, run: "minikube start -p download-only-243020"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-243020
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.1/json-events (3.69s)
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-257663 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-257663 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.690297929s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.69s)

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1007 11:31:32.738994  384271 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1007 11:31:32.739055  384271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19763-377026/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-257663
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-257663: exit status 85 (70.892085ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-243020 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | -p download-only-243020        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| delete  | -p download-only-243020        | download-only-243020 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC | 07 Oct 24 11:31 UTC |
	| start   | -o=json --download-only        | download-only-257663 | jenkins | v1.34.0 | 07 Oct 24 11:31 UTC |                     |
	|         | -p download-only-257663        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 11:31:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 11:31:29.094759  384483 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:31:29.094897  384483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:31:29.094909  384483 out.go:358] Setting ErrFile to fd 2...
	I1007 11:31:29.094913  384483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:31:29.095162  384483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 11:31:29.095834  384483 out.go:352] Setting JSON to true
	I1007 11:31:29.096948  384483 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4435,"bootTime":1728296254,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:31:29.097084  384483 start.go:139] virtualization: kvm guest
	I1007 11:31:29.099343  384483 out.go:97] [download-only-257663] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:31:29.099554  384483 notify.go:220] Checking for updates...
	I1007 11:31:29.101088  384483 out.go:169] MINIKUBE_LOCATION=19763
	I1007 11:31:29.102654  384483 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:31:29.104310  384483 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:31:29.105828  384483 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:31:29.107287  384483 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-257663 host does not exist
	  To start a cluster, run: "minikube start -p download-only-257663"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.15s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.15s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-257663
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.63s)
=== RUN   TestBinaryMirror
I1007 11:31:33.380411  384271 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-827339 --alsologtostderr --binary-mirror http://127.0.0.1:38787 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-827339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-827339
--- PASS: TestBinaryMirror (0.63s)
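For reference, the --binary-mirror flag exercised above points minikube's Kubernetes binary downloads at an alternative HTTP server; the 127.0.0.1:38787 address is only the test's throwaway local mirror. A minimal manual invocation along the same lines (the mirror URL and profile name below are placeholders, not values from this run) might look like:

    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://<mirror-host>:<port> --driver=kvm2 --container-runtime=crio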

TestOffline (83.87s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-596105 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-596105 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m22.733596069s)
helpers_test.go:175: Cleaning up "offline-crio-596105" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-596105
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-596105: (1.13897522s)
--- PASS: TestOffline (83.87s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-246818
addons_test.go:934: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-246818: exit status 85 (59.356595ms)

-- stdout --
	* Profile "addons-246818" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-246818"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-246818
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-246818: exit status 85 (58.73896ms)

-- stdout --
	* Profile "addons-246818" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-246818"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (204.32s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-246818 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-246818 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m24.322691277s)
--- PASS: TestAddons/Setup (204.32s)

TestAddons/serial/GCPAuth/Namespaces (2.37s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-246818 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-246818 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-246818 get secret gcp-auth -n new-namespace: exit status 1 (78.330468ms)

** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-246818 logs -l app=gcp-auth -n gcp-auth
I1007 11:34:59.004868  384271 retry.go:31] will retry after 2.09171869s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/10/07 11:34:57 GCP Auth Webhook started!
	2024/10/07 11:34:58 Ready to marshal response ...
	2024/10/07 11:34:58 Ready to write response ...

-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-246818 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.37s)
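The behaviour verified above can be re-checked by hand with the same two kubectl commands the test runs (assuming the gcp-auth addon is still enabled on the addons-246818 profile; the namespace name below is an arbitrary example). The secret is only copied in once the gcp-auth webhook has processed the new namespace, which appears to be why the first get in the log briefly returned NotFound before the retry:

    kubectl --context addons-246818 create ns demo-namespace
    kubectl --context addons-246818 get secret gcp-auth -n demo-namespace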

TestAddons/parallel/Registry (16.17s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.4558ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-pdbhh" [0abb32c0-d3dc-447d-a3b9-d672a6f088ff] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004186878s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nczxq" [f47e8fd0-0149-4ade-8c43-90e4eeb9b7cf] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004755747s
addons_test.go:331: (dbg) Run:  kubectl --context addons-246818 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-246818 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-246818 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.289956811s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 ip
2024/10/07 11:43:28 [DEBUG] GET http://192.168.39.141:5000
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.17s)
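The probe the test runs can double as a one-off smoke check of the registry addon from inside the cluster (the pod name below is arbitrary, and this assumes the registry addon is still enabled): it resolves the in-cluster service DNS name and asks wget to only check the URL rather than download anything:

    kubectl --context addons-246818 run registry-check --rm --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"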

TestAddons/parallel/InspektorGadget (11.9s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pll9f" [2029773a-acc0-46c9-8a8a-142ff86f64d5] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004665308s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-246818 addons disable inspektor-gadget --alsologtostderr -v=1: (5.89699194s)
--- PASS: TestAddons/parallel/InspektorGadget (11.90s)

TestAddons/parallel/Headlamp (18.75s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-246818 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-4qrzp" [e0f732c6-bd9d-4aac-a3cf-1450c3556f20] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-4qrzp" [e0f732c6-bd9d-4aac-a3cf-1450c3556f20] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-4qrzp" [e0f732c6-bd9d-4aac-a3cf-1450c3556f20] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003719184s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable headlamp --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-amd64 -p addons-246818 addons disable headlamp --alsologtostderr -v=1: (5.834669187s)
--- PASS: TestAddons/parallel/Headlamp (18.75s)

TestAddons/parallel/CloudSpanner (5.59s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-zg2hq" [ee95e639-975d-4172-9950-2f0bcdf275d7] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00431673s
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

TestAddons/parallel/NvidiaDevicePlugin (6.7s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-8tqmv" [69715854-4ded-41a3-83c7-1c8c927935d3] Running
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006794615s
addons_test.go:961: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-246818
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.70s)

TestAddons/parallel/Yakd (11.84s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-xxxxq" [1f85db51-068a-4e86-ad95-899e437569b7] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005248865s
addons_test.go:973: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable yakd --alsologtostderr -v=1
addons_test.go:973: (dbg) Done: out/minikube-linux-amd64 -p addons-246818 addons disable yakd --alsologtostderr -v=1: (5.835874979s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

TestCertOptions (49.57s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-831789 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-831789 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (48.044196069s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-831789 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-831789 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-831789 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-831789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-831789
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-831789: (1.022758794s)
--- PASS: TestCertOptions (49.57s)

TestCertExpiration (297.23s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-926690 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-926690 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m12.490053871s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-926690 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-926690 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (43.672698932s)
helpers_test.go:175: Cleaning up "cert-expiration-926690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-926690
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-926690: (1.062046858s)
--- PASS: TestCertExpiration (297.23s)
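After the second start above renews the certificates with the 8760h expiry, one way to spot-check the new lifetime on a profile that is still running is to read the apiserver certificate's end date over ssh, mirroring the openssl invocation TestCertOptions uses:

    out/minikube-linux-amd64 -p cert-expiration-926690 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"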

TestForceSystemdFlag (72.09s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-971127 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1007 13:01:42.462334  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-971127 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.051609666s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-971127 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-971127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-971127
--- PASS: TestForceSystemdFlag (72.09s)

TestForceSystemdEnv (69.6s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-869976 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-869976 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.557535539s)
helpers_test.go:175: Cleaning up "force-systemd-env-869976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-869976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-869976: (1.043940287s)
--- PASS: TestForceSystemdEnv (69.60s)

TestKVMDriverInstallOrUpdate (3.16s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1007 13:00:49.195625  384271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 13:00:49.195859  384271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1007 13:00:49.231243  384271 install.go:62] docker-machine-driver-kvm2: exit status 1
W1007 13:00:49.231725  384271 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1007 13:00:49.231806  384271 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2851628532/001/docker-machine-driver-kvm2
I1007 13:00:49.517296  384271 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2851628532/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60] Decompressors:map[bz2:0xc0006199d0 gz:0xc0006199d8 tar:0xc000619980 tar.bz2:0xc000619990 tar.gz:0xc0006199a0 tar.xz:0xc0006199b0 tar.zst:0xc0006199c0 tbz2:0xc000619990 tgz:0xc0006199a0 txz:0xc0006199b0 tzst:0xc0006199c0 xz:0xc0006199e0 zip:0xc0006199f0 zst:0xc0006199e8] Getters:map[file:0xc001996540 http:0xc0005ea9b0 https:0xc0005eaa00] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1007 13:00:49.517368  384271 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2851628532/001/docker-machine-driver-kvm2
I1007 13:00:50.900471  384271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 13:00:50.900610  384271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1007 13:00:50.932539  384271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1007 13:00:50.932584  384271 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1007 13:00:50.932660  384271 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1007 13:00:50.932695  384271 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2851628532/002/docker-machine-driver-kvm2
I1007 13:00:51.095673  384271 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2851628532/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60 0x52f3c60] Decompressors:map[bz2:0xc0006199d0 gz:0xc0006199d8 tar:0xc000619980 tar.bz2:0xc000619990 tar.gz:0xc0006199a0 tar.xz:0xc0006199b0 tar.zst:0xc0006199c0 tbz2:0xc000619990 tgz:0xc0006199a0 txz:0xc0006199b0 tzst:0xc0006199c0 xz:0xc0006199e0 zip:0xc0006199f0 zst:0xc0006199e8] Getters:map[file:0xc001997270 http:0xc000776500 https:0xc000776550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1007 13:00:51.095738  384271 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2851628532/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.16s)

TestErrorSpam/setup (42.81s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-636540 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-636540 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-636540 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-636540 --driver=kvm2  --container-runtime=crio: (42.809021164s)
--- PASS: TestErrorSpam/setup (42.81s)

TestErrorSpam/start (0.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.77s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.72s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.79s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

TestErrorSpam/stop (5.23s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 stop: (2.322380269s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-636540 --log_dir /tmp/nospam-636540 stop: (1.925852452s)
--- PASS: TestErrorSpam/stop (5.23s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19763-377026/.minikube/files/etc/test/nested/copy/384271/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (84.76s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790363 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-790363 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m24.760223251s)
--- PASS: TestFunctional/serial/StartWithProxy (84.76s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (54.12s)
=== RUN   TestFunctional/serial/SoftStart
I1007 11:55:00.758611  384271 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790363 --alsologtostderr -v=8
E1007 11:55:01.380731  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:01.387044  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:01.398579  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:01.420073  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:01.462327  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:01.544409  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:01.706047  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:02.027875  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:02.669447  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:03.951122  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:06.514139  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:11.636359  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:21.878056  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 11:55:42.360235  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-790363 --alsologtostderr -v=8: (54.120807032s)
functional_test.go:663: soft start took 54.12172093s for "functional-790363" cluster.
I1007 11:55:54.879936  384271 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (54.12s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-790363 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 cache add registry.k8s.io/pause:3.1: (1.096021288s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 cache add registry.k8s.io/pause:3.3: (1.199872404s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 cache add registry.k8s.io/pause:latest: (1.119993353s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

TestFunctional/serial/CacheCmd/cache/add_local (1.47s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-790363 /tmp/TestFunctionalserialCacheCmdcacheadd_local512943456/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 cache add minikube-local-cache-test:functional-790363
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 cache add minikube-local-cache-test:functional-790363: (1.142826527s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 cache delete minikube-local-cache-test:functional-790363
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-790363
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790363 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.592155ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 cache reload: (1.182221337s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 kubectl -- --context functional-790363 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-790363 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (32.01s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790363 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1007 11:56:23.322679  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-790363 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.011797312s)
functional_test.go:761: restart took 32.011951086s for "functional-790363" cluster.
I1007 11:56:34.527691  384271 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (32.01s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-790363 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 logs: (1.53926317s)
--- PASS: TestFunctional/serial/LogsCmd (1.54s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 logs --file /tmp/TestFunctionalserialLogsFileCmd1029147127/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 logs --file /tmp/TestFunctionalserialLogsFileCmd1029147127/001/logs.txt: (1.565496627s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                    
TestFunctional/serial/InvalidService (4.24s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-790363 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-790363
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-790363: exit status 115 (295.856867ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.166:32225 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-790363 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790363 config get cpus: exit status 14 (58.91896ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790363 config get cpus: exit status 14 (64.663689ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
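The exit status 14 seen above is what `config get` returned for an unset key in this run. A small sketch of branching on that from Go, assuming the same binary and profile:

// Sketch only: run `config get cpus` and treat exit code 14 as "key not set",
// based on the behaviour recorded in the run above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-790363",
		"config", "get", "cpus").Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus is set to %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		fmt.Println("cpus is not set in the minikube config")
	default:
		fmt.Println("config get failed:", err)
	}
}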

                                                
                                    
TestFunctional/parallel/DashboardCmd (69.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-790363 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-790363 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 398605: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (69.84s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790363 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-790363 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.296492ms)

                                                
                                                
-- stdout --
	* [functional-790363] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:58:04.451293  398513 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:58:04.451402  398513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:58:04.451411  398513 out.go:358] Setting ErrFile to fd 2...
	I1007 11:58:04.451415  398513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:58:04.451608  398513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 11:58:04.452130  398513 out.go:352] Setting JSON to false
	I1007 11:58:04.453121  398513 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6030,"bootTime":1728296254,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:58:04.453223  398513 start.go:139] virtualization: kvm guest
	I1007 11:58:04.455320  398513 out.go:177] * [functional-790363] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1007 11:58:04.457093  398513 notify.go:220] Checking for updates...
	I1007 11:58:04.457122  398513 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 11:58:04.458394  398513 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:58:04.459783  398513 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:58:04.461378  398513 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:58:04.462636  398513 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:58:04.463849  398513 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:58:04.465659  398513 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:58:04.466242  398513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:58:04.466340  398513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:58:04.482247  398513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
	I1007 11:58:04.482668  398513 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:58:04.483229  398513 main.go:141] libmachine: Using API Version  1
	I1007 11:58:04.483245  398513 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:58:04.483628  398513 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:58:04.483865  398513 main.go:141] libmachine: (functional-790363) Calling .DriverName
	I1007 11:58:04.484136  398513 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:58:04.484455  398513 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:58:04.484501  398513 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:58:04.500020  398513 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43775
	I1007 11:58:04.500431  398513 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:58:04.500938  398513 main.go:141] libmachine: Using API Version  1
	I1007 11:58:04.500963  398513 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:58:04.501351  398513 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:58:04.501550  398513 main.go:141] libmachine: (functional-790363) Calling .DriverName
	I1007 11:58:04.535760  398513 out.go:177] * Using the kvm2 driver based on existing profile
	I1007 11:58:04.537039  398513 start.go:297] selected driver: kvm2
	I1007 11:58:04.537051  398513 start.go:901] validating driver "kvm2" against &{Name:functional-790363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:functional-790363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:58:04.537170  398513 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:58:04.539067  398513 out.go:201] 
	W1007 11:58:04.540354  398513 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1007 11:58:04.541496  398513 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790363 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-790363 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-790363 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (154.442143ms)

                                                
                                                
-- stdout --
	* [functional-790363] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 11:58:04.305103  398468 out.go:345] Setting OutFile to fd 1 ...
	I1007 11:58:04.305270  398468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:58:04.305284  398468 out.go:358] Setting ErrFile to fd 2...
	I1007 11:58:04.305292  398468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 11:58:04.305757  398468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 11:58:04.306485  398468 out.go:352] Setting JSON to false
	I1007 11:58:04.308466  398468 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6030,"bootTime":1728296254,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1007 11:58:04.308681  398468 start.go:139] virtualization: kvm guest
	I1007 11:58:04.311504  398468 out.go:177] * [functional-790363] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1007 11:58:04.313027  398468 notify.go:220] Checking for updates...
	I1007 11:58:04.313270  398468 out.go:177]   - MINIKUBE_LOCATION=19763
	I1007 11:58:04.315083  398468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 11:58:04.316729  398468 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	I1007 11:58:04.318153  398468 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	I1007 11:58:04.319530  398468 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1007 11:58:04.320830  398468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 11:58:04.322521  398468 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 11:58:04.322929  398468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:58:04.323002  398468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:58:04.339982  398468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33319
	I1007 11:58:04.340474  398468 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:58:04.341265  398468 main.go:141] libmachine: Using API Version  1
	I1007 11:58:04.341308  398468 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:58:04.341665  398468 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:58:04.341875  398468 main.go:141] libmachine: (functional-790363) Calling .DriverName
	I1007 11:58:04.342181  398468 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 11:58:04.342732  398468 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 11:58:04.342787  398468 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 11:58:04.358912  398468 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37793
	I1007 11:58:04.359463  398468 main.go:141] libmachine: () Calling .GetVersion
	I1007 11:58:04.359991  398468 main.go:141] libmachine: Using API Version  1
	I1007 11:58:04.360016  398468 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 11:58:04.360433  398468 main.go:141] libmachine: () Calling .GetMachineName
	I1007 11:58:04.360611  398468 main.go:141] libmachine: (functional-790363) Calling .DriverName
	I1007 11:58:04.396410  398468 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1007 11:58:04.397685  398468 start.go:297] selected driver: kvm2
	I1007 11:58:04.397700  398468 start.go:901] validating driver "kvm2" against &{Name:functional-790363 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.1 ClusterName:functional-790363 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.166 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 11:58:04.397850  398468 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 11:58:04.399881  398468 out.go:201] 
	W1007 11:58:04.401317  398468 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1007 11:58:04.402671  398468 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (79.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-790363 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-790363 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-nnv6b" [fa888d7c-ba75-4424-b0f9-0b53ef6e15d2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-nnv6b" [fa888d7c-ba75-4424-b0f9-0b53ef6e15d2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 1m19.005707188s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.166:30722
functional_test.go:1675: http://192.168.39.166:30722: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-nnv6b

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.166:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.166:30722
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (79.54s)
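The `user-agent=Go-http-client/1.1` header in the response above is Go's default HTTP client; the probe can be reproduced in a few lines, assuming the NodePort URL printed by `service hello-node-connect --url` for this run (http://192.168.39.166:30722 here; it will differ per cluster):

// Sketch only: fetch the echoserver behind the NodePort service and dump
// the response, as the connectivity check above does.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.39.166:30722")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status: %s\n%s", resp.Status, body)
}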

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh -n functional-790363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 cp functional-790363:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd758578742/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh -n functional-790363 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh -n functional-790363 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.35s)

                                                
                                    
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/384271/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo cat /etc/test/nested/copy/384271/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
TestFunctional/parallel/CertSync (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/384271.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo cat /etc/ssl/certs/384271.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/384271.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo cat /usr/share/ca-certificates/384271.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3842712.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo cat /etc/ssl/certs/3842712.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3842712.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo cat /usr/share/ca-certificates/3842712.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.40s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-790363 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790363 ssh "sudo systemctl is-active docker": exit status 1 (218.845231ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790363 ssh "sudo systemctl is-active containerd": exit status 1 (253.035925ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
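The checks above confirm that docker and containerd are inactive while CRI-O is the selected runtime. A small sketch that reads the printed unit state rather than exit codes (in the run above `minikube ssh` reports its own non-zero status when the remote command fails); binary and profile names are again taken from this run:

// Sketch only: print the systemd state of each container runtime inside the node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runtimeState(unit string) string {
	// Output() still returns the captured stdout ("inactive") alongside the error.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-790363",
		"ssh", "sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s: %s\n", unit, runtimeState(unit))
	}
}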

                                                
                                    
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-790363 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-790363
localhost/kicbase/echo-server:functional-790363
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-790363 image ls --format short --alsologtostderr:
I1007 11:59:03.993394  399364 out.go:345] Setting OutFile to fd 1 ...
I1007 11:59:03.993519  399364 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 11:59:03.993529  399364 out.go:358] Setting ErrFile to fd 2...
I1007 11:59:03.993534  399364 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 11:59:03.993730  399364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
I1007 11:59:03.994359  399364 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 11:59:03.994458  399364 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 11:59:03.994897  399364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 11:59:03.994982  399364 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 11:59:04.015663  399364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37383
I1007 11:59:04.016189  399364 main.go:141] libmachine: () Calling .GetVersion
I1007 11:59:04.016852  399364 main.go:141] libmachine: Using API Version  1
I1007 11:59:04.016877  399364 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 11:59:04.017222  399364 main.go:141] libmachine: () Calling .GetMachineName
I1007 11:59:04.017461  399364 main.go:141] libmachine: (functional-790363) Calling .GetState
I1007 11:59:04.019484  399364 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 11:59:04.019530  399364 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 11:59:04.035658  399364 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
I1007 11:59:04.036100  399364 main.go:141] libmachine: () Calling .GetVersion
I1007 11:59:04.036776  399364 main.go:141] libmachine: Using API Version  1
I1007 11:59:04.036810  399364 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 11:59:04.037164  399364 main.go:141] libmachine: () Calling .GetMachineName
I1007 11:59:04.037432  399364 main.go:141] libmachine: (functional-790363) Calling .DriverName
I1007 11:59:04.037647  399364 ssh_runner.go:195] Run: systemctl --version
I1007 11:59:04.037678  399364 main.go:141] libmachine: (functional-790363) Calling .GetSSHHostname
I1007 11:59:04.040845  399364 main.go:141] libmachine: (functional-790363) DBG | domain functional-790363 has defined MAC address 52:54:00:e7:bf:fa in network mk-functional-790363
I1007 11:59:04.041243  399364 main.go:141] libmachine: (functional-790363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:bf:fa", ip: ""} in network mk-functional-790363: {Iface:virbr1 ExpiryTime:2024-10-07 12:53:50 +0000 UTC Type:0 Mac:52:54:00:e7:bf:fa Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:functional-790363 Clientid:01:52:54:00:e7:bf:fa}
I1007 11:59:04.041274  399364 main.go:141] libmachine: (functional-790363) DBG | domain functional-790363 has defined IP address 192.168.39.166 and MAC address 52:54:00:e7:bf:fa in network mk-functional-790363
I1007 11:59:04.041437  399364 main.go:141] libmachine: (functional-790363) Calling .GetSSHPort
I1007 11:59:04.041634  399364 main.go:141] libmachine: (functional-790363) Calling .GetSSHKeyPath
I1007 11:59:04.041773  399364 main.go:141] libmachine: (functional-790363) Calling .GetSSHUsername
I1007 11:59:04.041884  399364 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/functional-790363/id_rsa Username:docker}
I1007 11:59:04.118174  399364 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 11:59:04.168125  399364 main.go:141] libmachine: Making call to close driver server
I1007 11:59:04.168140  399364 main.go:141] libmachine: (functional-790363) Calling .Close
I1007 11:59:04.168509  399364 main.go:141] libmachine: Successfully made call to close driver server
I1007 11:59:04.168605  399364 main.go:141] libmachine: (functional-790363) DBG | Closing plugin on server side
I1007 11:59:04.168642  399364 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 11:59:04.168654  399364 main.go:141] libmachine: Making call to close driver server
I1007 11:59:04.168667  399364 main.go:141] libmachine: (functional-790363) Calling .Close
I1007 11:59:04.168938  399364 main.go:141] libmachine: Successfully made call to close driver server
I1007 11:59:04.168960  399364 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 11:59:04.168968  399364 main.go:141] libmachine: (functional-790363) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-790363 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| localhost/my-image                      | functional-790363  | 7b64985e663a4 | 1.47MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| localhost/minikube-local-cache-test     | functional-790363  | 7c2763a4283f1 | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/kicbase/echo-server           | functional-790363  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-790363 image ls --format table --alsologtostderr:
I1007 11:59:07.042168  399532 out.go:345] Setting OutFile to fd 1 ...
I1007 11:59:07.042318  399532 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 11:59:07.042330  399532 out.go:358] Setting ErrFile to fd 2...
I1007 11:59:07.042336  399532 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 11:59:07.042521  399532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
I1007 11:59:07.043149  399532 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 11:59:07.043275  399532 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 11:59:07.043725  399532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 11:59:07.043787  399532 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 11:59:07.059423  399532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45845
I1007 11:59:07.060210  399532 main.go:141] libmachine: () Calling .GetVersion
I1007 11:59:07.061242  399532 main.go:141] libmachine: Using API Version  1
I1007 11:59:07.061365  399532 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 11:59:07.061744  399532 main.go:141] libmachine: () Calling .GetMachineName
I1007 11:59:07.061973  399532 main.go:141] libmachine: (functional-790363) Calling .GetState
I1007 11:59:07.064149  399532 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 11:59:07.064196  399532 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 11:59:07.080561  399532 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37691
I1007 11:59:07.081033  399532 main.go:141] libmachine: () Calling .GetVersion
I1007 11:59:07.081583  399532 main.go:141] libmachine: Using API Version  1
I1007 11:59:07.081608  399532 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 11:59:07.081924  399532 main.go:141] libmachine: () Calling .GetMachineName
I1007 11:59:07.082111  399532 main.go:141] libmachine: (functional-790363) Calling .DriverName
I1007 11:59:07.082352  399532 ssh_runner.go:195] Run: systemctl --version
I1007 11:59:07.082382  399532 main.go:141] libmachine: (functional-790363) Calling .GetSSHHostname
I1007 11:59:07.084840  399532 main.go:141] libmachine: (functional-790363) DBG | domain functional-790363 has defined MAC address 52:54:00:e7:bf:fa in network mk-functional-790363
I1007 11:59:07.085192  399532 main.go:141] libmachine: (functional-790363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:bf:fa", ip: ""} in network mk-functional-790363: {Iface:virbr1 ExpiryTime:2024-10-07 12:53:50 +0000 UTC Type:0 Mac:52:54:00:e7:bf:fa Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:functional-790363 Clientid:01:52:54:00:e7:bf:fa}
I1007 11:59:07.085220  399532 main.go:141] libmachine: (functional-790363) DBG | domain functional-790363 has defined IP address 192.168.39.166 and MAC address 52:54:00:e7:bf:fa in network mk-functional-790363
I1007 11:59:07.085392  399532 main.go:141] libmachine: (functional-790363) Calling .GetSSHPort
I1007 11:59:07.085575  399532 main.go:141] libmachine: (functional-790363) Calling .GetSSHKeyPath
I1007 11:59:07.085739  399532 main.go:141] libmachine: (functional-790363) Calling .GetSSHUsername
I1007 11:59:07.085884  399532 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/functional-790363/id_rsa Username:docker}
I1007 11:59:07.162141  399532 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 11:59:07.201004  399532 main.go:141] libmachine: Making call to close driver server
I1007 11:59:07.201022  399532 main.go:141] libmachine: (functional-790363) Calling .Close
I1007 11:59:07.201352  399532 main.go:141] libmachine: Successfully made call to close driver server
I1007 11:59:07.201373  399532 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 11:59:07.201391  399532 main.go:141] libmachine: Making call to close driver server
I1007 11:59:07.201399  399532 main.go:141] libmachine: (functional-790363) Calling .Close
I1007 11:59:07.201404  399532 main.go:141] libmachine: (functional-790363) DBG | Closing plugin on server side
I1007 11:59:07.201641  399532 main.go:141] libmachine: Successfully made call to close driver server
I1007 11:59:07.201657  399532 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 11:59:07.201675  399532 main.go:141] libmachine: (functional-790363) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-790363 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"7c2763a4283f14ef3b2f53f0cc998bf570c4233a5278c461735ef57fee313ba7","repoDigests":["localhost/minikube-local-cache-test@sha256:7b9e6ca3212513d79c81be2051b566af8a5fc46b4362b67f662d5b608bc683ff"],"repoTags":["localhost/minikube-local-cache-test:functional-790363"],"size":"3330"},{"id":"7b64985e663a48a83a0d0f22fe17346b8fdcb2bd8fbc95dcf1252d7c2b6ff171","repoDigests":["localhost/my-image@sha256:2e5e60db044745391430d769eb8fb12880f836b8b6b034191e58aa942626c887"],"repoTags":["localhost/my-image:functional-790363"],"size":"1468599"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controlle
r-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e40054220
2d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTa
gs":["localhost/kicbase/echo-server:functional-790363"],"size":"4943877"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92
733849"},{"id":"4cb3bae1a740c2d427348a56d1db8ae55cb83f3f2bc6882bddb8dff50b967003","repoDigests":["docker.io/library/29764152e9d2d6b2bdbc166b10295f2e77d40cdbd8494f25da494e2cd4b87987-tmp@sha256:88c69bca86dfd1dced0925a11e32c704fb9604eb70650bd93d528b5d1b49847e"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237
600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f
95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-790363 image ls --format json --alsologtostderr:
I1007 11:59:06.831054  399508 out.go:345] Setting OutFile to fd 1 ...
I1007 11:59:06.831206  399508 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 11:59:06.831218  399508 out.go:358] Setting ErrFile to fd 2...
I1007 11:59:06.831225  399508 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 11:59:06.831421  399508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
I1007 11:59:06.832011  399508 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 11:59:06.832108  399508 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 11:59:06.832477  399508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 11:59:06.832549  399508 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 11:59:06.847987  399508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
I1007 11:59:06.848575  399508 main.go:141] libmachine: () Calling .GetVersion
I1007 11:59:06.849259  399508 main.go:141] libmachine: Using API Version  1
I1007 11:59:06.849284  399508 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 11:59:06.849616  399508 main.go:141] libmachine: () Calling .GetMachineName
I1007 11:59:06.849809  399508 main.go:141] libmachine: (functional-790363) Calling .GetState
I1007 11:59:06.851426  399508 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 11:59:06.851471  399508 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 11:59:06.868114  399508 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
I1007 11:59:06.868543  399508 main.go:141] libmachine: () Calling .GetVersion
I1007 11:59:06.869075  399508 main.go:141] libmachine: Using API Version  1
I1007 11:59:06.869115  399508 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 11:59:06.869482  399508 main.go:141] libmachine: () Calling .GetMachineName
I1007 11:59:06.869677  399508 main.go:141] libmachine: (functional-790363) Calling .DriverName
I1007 11:59:06.869900  399508 ssh_runner.go:195] Run: systemctl --version
I1007 11:59:06.869934  399508 main.go:141] libmachine: (functional-790363) Calling .GetSSHHostname
I1007 11:59:06.872793  399508 main.go:141] libmachine: (functional-790363) DBG | domain functional-790363 has defined MAC address 52:54:00:e7:bf:fa in network mk-functional-790363
I1007 11:59:06.873180  399508 main.go:141] libmachine: (functional-790363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:bf:fa", ip: ""} in network mk-functional-790363: {Iface:virbr1 ExpiryTime:2024-10-07 12:53:50 +0000 UTC Type:0 Mac:52:54:00:e7:bf:fa Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:functional-790363 Clientid:01:52:54:00:e7:bf:fa}
I1007 11:59:06.873217  399508 main.go:141] libmachine: (functional-790363) DBG | domain functional-790363 has defined IP address 192.168.39.166 and MAC address 52:54:00:e7:bf:fa in network mk-functional-790363
I1007 11:59:06.873415  399508 main.go:141] libmachine: (functional-790363) Calling .GetSSHPort
I1007 11:59:06.873584  399508 main.go:141] libmachine: (functional-790363) Calling .GetSSHKeyPath
I1007 11:59:06.873737  399508 main.go:141] libmachine: (functional-790363) Calling .GetSSHUsername
I1007 11:59:06.873877  399508 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/functional-790363/id_rsa Username:docker}
I1007 11:59:06.951903  399508 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 11:59:06.989102  399508 main.go:141] libmachine: Making call to close driver server
I1007 11:59:06.989118  399508 main.go:141] libmachine: (functional-790363) Calling .Close
I1007 11:59:06.989433  399508 main.go:141] libmachine: (functional-790363) DBG | Closing plugin on server side
I1007 11:59:06.989483  399508 main.go:141] libmachine: Successfully made call to close driver server
I1007 11:59:06.989491  399508 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 11:59:06.989505  399508 main.go:141] libmachine: Making call to close driver server
I1007 11:59:06.989516  399508 main.go:141] libmachine: (functional-790363) Calling .Close
I1007 11:59:06.989737  399508 main.go:141] libmachine: Successfully made call to close driver server
I1007 11:59:06.989763  399508 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 11:59:06.989765  399508 main.go:141] libmachine: (functional-790363) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
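Note: each entry in the JSON listing above carries four keys (id, repoDigests, repoTags, and size, the last reported as a string of bytes), and the stderr shows the data comes from "sudo crictl images --output json" run inside the guest. Below is a minimal Go sketch, not part of the test suite, for decoding that shape; the profile name and a minikube binary on PATH are assumptions.

// listimages.go: illustrative sketch that decodes the "image ls --format json"
// output shown above. The struct fields mirror the keys visible in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string of bytes, e.g. "1462480"
}

func main() {
	// Assumes a minikube binary on PATH and a running profile named "functional-790363".
	out, err := exec.Command("minikube", "-p", "functional-790363", "image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "image ls failed:", err)
		os.Exit(1)
	}
	var images []imageEntry
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}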

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-790363 image ls --format yaml --alsologtostderr:
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-790363
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 7c2763a4283f14ef3b2f53f0cc998bf570c4233a5278c461735ef57fee313ba7
repoDigests:
- localhost/minikube-local-cache-test@sha256:7b9e6ca3212513d79c81be2051b566af8a5fc46b4362b67f662d5b608bc683ff
repoTags:
- localhost/minikube-local-cache-test:functional-790363
size: "3330"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-790363 image ls --format yaml --alsologtostderr:
I1007 11:59:04.221840  399387 out.go:345] Setting OutFile to fd 1 ...
I1007 11:59:04.222131  399387 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 11:59:04.222142  399387 out.go:358] Setting ErrFile to fd 2...
I1007 11:59:04.222147  399387 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 11:59:04.222330  399387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
I1007 11:59:04.222892  399387 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 11:59:04.223019  399387 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 11:59:04.223474  399387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 11:59:04.223526  399387 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 11:59:04.239350  399387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43063
I1007 11:59:04.239865  399387 main.go:141] libmachine: () Calling .GetVersion
I1007 11:59:04.240495  399387 main.go:141] libmachine: Using API Version  1
I1007 11:59:04.240521  399387 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 11:59:04.240866  399387 main.go:141] libmachine: () Calling .GetMachineName
I1007 11:59:04.241082  399387 main.go:141] libmachine: (functional-790363) Calling .GetState
I1007 11:59:04.242968  399387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 11:59:04.243023  399387 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 11:59:04.258708  399387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40969
I1007 11:59:04.259148  399387 main.go:141] libmachine: () Calling .GetVersion
I1007 11:59:04.259705  399387 main.go:141] libmachine: Using API Version  1
I1007 11:59:04.259733  399387 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 11:59:04.260045  399387 main.go:141] libmachine: () Calling .GetMachineName
I1007 11:59:04.260245  399387 main.go:141] libmachine: (functional-790363) Calling .DriverName
I1007 11:59:04.260482  399387 ssh_runner.go:195] Run: systemctl --version
I1007 11:59:04.260506  399387 main.go:141] libmachine: (functional-790363) Calling .GetSSHHostname
I1007 11:59:04.263375  399387 main.go:141] libmachine: (functional-790363) DBG | domain functional-790363 has defined MAC address 52:54:00:e7:bf:fa in network mk-functional-790363
I1007 11:59:04.263792  399387 main.go:141] libmachine: (functional-790363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:bf:fa", ip: ""} in network mk-functional-790363: {Iface:virbr1 ExpiryTime:2024-10-07 12:53:50 +0000 UTC Type:0 Mac:52:54:00:e7:bf:fa Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:functional-790363 Clientid:01:52:54:00:e7:bf:fa}
I1007 11:59:04.263829  399387 main.go:141] libmachine: (functional-790363) DBG | domain functional-790363 has defined IP address 192.168.39.166 and MAC address 52:54:00:e7:bf:fa in network mk-functional-790363
I1007 11:59:04.263970  399387 main.go:141] libmachine: (functional-790363) Calling .GetSSHPort
I1007 11:59:04.264153  399387 main.go:141] libmachine: (functional-790363) Calling .GetSSHKeyPath
I1007 11:59:04.264286  399387 main.go:141] libmachine: (functional-790363) Calling .GetSSHUsername
I1007 11:59:04.264543  399387 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/functional-790363/id_rsa Username:docker}
I1007 11:59:04.342161  399387 ssh_runner.go:195] Run: sudo crictl images --output json
I1007 11:59:04.385326  399387 main.go:141] libmachine: Making call to close driver server
I1007 11:59:04.385345  399387 main.go:141] libmachine: (functional-790363) Calling .Close
I1007 11:59:04.385656  399387 main.go:141] libmachine: Successfully made call to close driver server
I1007 11:59:04.385678  399387 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 11:59:04.385691  399387 main.go:141] libmachine: Making call to close driver server
I1007 11:59:04.385700  399387 main.go:141] libmachine: (functional-790363) Calling .Close
I1007 11:59:04.385699  399387 main.go:141] libmachine: (functional-790363) DBG | Closing plugin on server side
I1007 11:59:04.385967  399387 main.go:141] libmachine: Successfully made call to close driver server
I1007 11:59:04.385985  399387 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790363 ssh pgrep buildkitd: exit status 1 (191.388881ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image build -t localhost/my-image:functional-790363 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 image build -t localhost/my-image:functional-790363 testdata/build --alsologtostderr: (1.971524435s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-790363 image build -t localhost/my-image:functional-790363 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4cb3bae1a74
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-790363
--> 7b64985e663
Successfully tagged localhost/my-image:functional-790363
7b64985e663a48a83a0d0f22fe17346b8fdcb2bd8fbc95dcf1252d7c2b6ff171
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-790363 image build -t localhost/my-image:functional-790363 testdata/build --alsologtostderr:
I1007 11:59:04.630799  399459 out.go:345] Setting OutFile to fd 1 ...
I1007 11:59:04.630922  399459 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 11:59:04.630931  399459 out.go:358] Setting ErrFile to fd 2...
I1007 11:59:04.630936  399459 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 11:59:04.631151  399459 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
I1007 11:59:04.631809  399459 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 11:59:04.632459  399459 config.go:182] Loaded profile config "functional-790363": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1007 11:59:04.632829  399459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 11:59:04.632874  399459 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 11:59:04.648514  399459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
I1007 11:59:04.649127  399459 main.go:141] libmachine: () Calling .GetVersion
I1007 11:59:04.649747  399459 main.go:141] libmachine: Using API Version  1
I1007 11:59:04.649771  399459 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 11:59:04.650075  399459 main.go:141] libmachine: () Calling .GetMachineName
I1007 11:59:04.650249  399459 main.go:141] libmachine: (functional-790363) Calling .GetState
I1007 11:59:04.651955  399459 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1007 11:59:04.652006  399459 main.go:141] libmachine: Launching plugin server for driver kvm2
I1007 11:59:04.667464  399459 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36883
I1007 11:59:04.667979  399459 main.go:141] libmachine: () Calling .GetVersion
I1007 11:59:04.668693  399459 main.go:141] libmachine: Using API Version  1
I1007 11:59:04.668741  399459 main.go:141] libmachine: () Calling .SetConfigRaw
I1007 11:59:04.669127  399459 main.go:141] libmachine: () Calling .GetMachineName
I1007 11:59:04.669353  399459 main.go:141] libmachine: (functional-790363) Calling .DriverName
I1007 11:59:04.669604  399459 ssh_runner.go:195] Run: systemctl --version
I1007 11:59:04.669649  399459 main.go:141] libmachine: (functional-790363) Calling .GetSSHHostname
I1007 11:59:04.672866  399459 main.go:141] libmachine: (functional-790363) DBG | domain functional-790363 has defined MAC address 52:54:00:e7:bf:fa in network mk-functional-790363
I1007 11:59:04.673338  399459 main.go:141] libmachine: (functional-790363) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e7:bf:fa", ip: ""} in network mk-functional-790363: {Iface:virbr1 ExpiryTime:2024-10-07 12:53:50 +0000 UTC Type:0 Mac:52:54:00:e7:bf:fa Iaid: IPaddr:192.168.39.166 Prefix:24 Hostname:functional-790363 Clientid:01:52:54:00:e7:bf:fa}
I1007 11:59:04.673368  399459 main.go:141] libmachine: (functional-790363) DBG | domain functional-790363 has defined IP address 192.168.39.166 and MAC address 52:54:00:e7:bf:fa in network mk-functional-790363
I1007 11:59:04.673514  399459 main.go:141] libmachine: (functional-790363) Calling .GetSSHPort
I1007 11:59:04.673689  399459 main.go:141] libmachine: (functional-790363) Calling .GetSSHKeyPath
I1007 11:59:04.673842  399459 main.go:141] libmachine: (functional-790363) Calling .GetSSHUsername
I1007 11:59:04.673981  399459 sshutil.go:53] new ssh client: &{IP:192.168.39.166 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/functional-790363/id_rsa Username:docker}
I1007 11:59:04.753911  399459 build_images.go:161] Building image from path: /tmp/build.2583867218.tar
I1007 11:59:04.753985  399459 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1007 11:59:04.770258  399459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2583867218.tar
I1007 11:59:04.775037  399459 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2583867218.tar: stat -c "%s %y" /var/lib/minikube/build/build.2583867218.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2583867218.tar': No such file or directory
I1007 11:59:04.775074  399459 ssh_runner.go:362] scp /tmp/build.2583867218.tar --> /var/lib/minikube/build/build.2583867218.tar (3072 bytes)
I1007 11:59:04.826843  399459 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2583867218
I1007 11:59:04.843925  399459 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2583867218 -xf /var/lib/minikube/build/build.2583867218.tar
I1007 11:59:04.861514  399459 crio.go:315] Building image: /var/lib/minikube/build/build.2583867218
I1007 11:59:04.861601  399459 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-790363 /var/lib/minikube/build/build.2583867218 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1007 11:59:06.526049  399459 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-790363 /var/lib/minikube/build/build.2583867218 --cgroup-manager=cgroupfs: (1.664406341s)
I1007 11:59:06.526250  399459 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2583867218
I1007 11:59:06.537321  399459 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2583867218.tar
I1007 11:59:06.548191  399459 build_images.go:217] Built localhost/my-image:functional-790363 from /tmp/build.2583867218.tar
I1007 11:59:06.548242  399459 build_images.go:133] succeeded building to: functional-790363
I1007 11:59:06.548248  399459 build_images.go:134] failed building to: 
I1007 11:59:06.548283  399459 main.go:141] libmachine: Making call to close driver server
I1007 11:59:06.548296  399459 main.go:141] libmachine: (functional-790363) Calling .Close
I1007 11:59:06.548598  399459 main.go:141] libmachine: Successfully made call to close driver server
I1007 11:59:06.548619  399459 main.go:141] libmachine: Making call to close connection to plugin binary
I1007 11:59:06.548628  399459 main.go:141] libmachine: Making call to close driver server
I1007 11:59:06.548633  399459 main.go:141] libmachine: (functional-790363) DBG | Closing plugin on server side
I1007 11:59:06.548636  399459 main.go:141] libmachine: (functional-790363) Calling .Close
I1007 11:59:06.548937  399459 main.go:141] libmachine: (functional-790363) DBG | Closing plugin on server side
I1007 11:59:06.548963  399459 main.go:141] libmachine: Successfully made call to close driver server
I1007 11:59:06.549000  399459 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.39s)
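Note: the build output above shows a three-step context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt) executed with "sudo podman build" inside the guest, after which the test re-runs "image ls" to confirm the new tag (functional_test.go:451). A hedged Go sketch of that build-then-verify flow follows; the context path and profile name are assumptions, and the snippet is illustrative rather than the harness's own code.

// buildcheck.go: sketch of the build-then-verify flow logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-790363"
	tag := "localhost/my-image:" + profile

	// Build from a local directory containing a Dockerfile (testdata/build in the test).
	build := exec.Command("minikube", "-p", profile, "image", "build", "-t", tag, "./testdata/build")
	build.Stdout, build.Stderr = os.Stdout, os.Stderr
	if err := build.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "build failed:", err)
		os.Exit(1)
	}

	// Confirm the freshly built tag now appears in the image list.
	out, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "image ls failed:", err)
		os.Exit(1)
	}
	if strings.Contains(string(out), tag) {
		fmt.Println("image present:", tag)
	} else {
		fmt.Println("image not found:", tag)
	}
}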

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-790363
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image load --daemon kicbase/echo-server:functional-790363 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-790363 image load --daemon kicbase/echo-server:functional-790363 --alsologtostderr: (1.365677501s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image load --daemon kicbase/echo-server:functional-790363 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-790363
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image load --daemon kicbase/echo-server:functional-790363 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image save kicbase/echo-server:functional-790363 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image rm kicbase/echo-server:functional-790363 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-790363
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 image save --daemon kicbase/echo-server:functional-790363 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-790363
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (71.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-790363 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-790363 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-rzmtr" [2dd3c63a-e3a9-48e4-b35c-6eeb69b38295] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-rzmtr" [2dd3c63a-e3a9-48e4-b35c-6eeb69b38295] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 1m11.00423574s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (71.17s)
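Note: the sequence above creates a deployment from registry.k8s.io/echoserver:1.8, exposes it as a NodePort on port 8080, and then waits for pods labelled app=hello-node to become Ready (71s in this run). The Go sketch below reproduces that flow under stated assumptions; it leans on "kubectl wait" instead of the harness's own pod polling, and the context name is taken from the log.

// deploycheck.go: sketch of the deploy-and-wait sequence logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-790363"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	steps := [][]string{
		{"create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8"},
		{"expose", "deployment", "hello-node", "--type=NodePort", "--port=8080"},
		{"wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=10m"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Fprintln(os.Stderr, "step failed:", s, err)
			os.Exit(1)
		}
	}
	fmt.Println("hello-node is ready")
}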

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 service list -o json
functional_test.go:1494: Took "434.280346ms" to run "out/minikube-linux-amd64 -p functional-790363 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.166:32579
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.166:32579
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "289.464067ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.640769ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "266.559483ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "52.359042ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (56.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdany-port2644637697/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728302283108314061" to /tmp/TestFunctionalparallelMountCmdany-port2644637697/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728302283108314061" to /tmp/TestFunctionalparallelMountCmdany-port2644637697/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728302283108314061" to /tmp/TestFunctionalparallelMountCmdany-port2644637697/001/test-1728302283108314061
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.028018ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1007 11:58:03.333621  384271 retry.go:31] will retry after 275.833422ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  7 11:58 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  7 11:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  7 11:58 test-1728302283108314061
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh cat /mount-9p/test-1728302283108314061
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-790363 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e4ea3387-0a8d-43b1-8ed0-a5caf15f672b] Pending
helpers_test.go:344: "busybox-mount" [e4ea3387-0a8d-43b1-8ed0-a5caf15f672b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e4ea3387-0a8d-43b1-8ed0-a5caf15f672b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e4ea3387-0a8d-43b1-8ed0-a5caf15f672b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 54.003799619s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-790363 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdany-port2644637697/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (56.36s)
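Note: the any-port test writes marker files into a host temp directory, mounts it into the guest at /mount-9p with a background "minikube mount", and verifies the 9p mount with "findmnt -T /mount-9p", retrying once here before it appears. The Go sketch below mirrors that start-then-retry pattern; the host source path is a placeholder and the snippet is illustrative, not the test's implementation.

// mountcheck.go: sketch of the mount verification pattern shown above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-790363"

	// Start the 9p mount helper in the background; the host path is a placeholder.
	mount := exec.Command("minikube", "mount", "-p", profile, "/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		fmt.Fprintln(os.Stderr, "mount failed to start:", err)
		os.Exit(1)
	}

	// Retry until the guest reports the 9p mount, as the test does.
	for i := 0; i < 10; i++ {
		check := exec.Command("minikube", "-p", profile, "ssh", "--", "findmnt", "-T", "/mount-9p")
		if out, err := check.Output(); err == nil {
			fmt.Print(string(out))
			mount.Process.Kill() // tear the background mount helper down again
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	mount.Process.Kill()
	fmt.Fprintln(os.Stderr, "mount never became visible in the guest")
	os.Exit(1)
}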

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdspecific-port31618326/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.875502ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1007 11:58:59.727466  384271 retry.go:31] will retry after 353.543739ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdspecific-port31618326/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790363 ssh "sudo umount -f /mount-9p": exit status 1 (215.299977ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-790363 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdspecific-port31618326/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T" /mount1: exit status 1 (219.496896ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1007 11:59:01.487761  384271 retry.go:31] will retry after 519.351259ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-790363 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-790363 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2134571989/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.37s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 update-context --alsologtostderr -v=2
2024/10/07 11:59:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-790363 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-790363
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-790363
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-790363
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (197.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-628553 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1007 12:10:01.380369  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-628553 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m16.581040231s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (197.27s)
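
Note: StartCluster brings up a multi-control-plane (HA) cluster with the flags shown above and then checks every node with status. Condensed to the two commands from this run:

    out/minikube-linux-amd64 start -p ha-628553 --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr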

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-628553 -- rollout status deployment/busybox: (4.103267401s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-75ng4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-jhmrp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-vc5k8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-75ng4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-jhmrp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-vc5k8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-75ng4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-jhmrp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-vc5k8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.61s)
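
Note: DeployApp checks in-cluster DNS from every busybox replica against three names of increasing specificity. The equivalent manual probes against one pod from this run:

    out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-75ng4 -- nslookup kubernetes.io
    out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-75ng4 -- nslookup kubernetes.default
    out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-75ng4 -- nslookup kubernetes.default.svc.cluster.local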

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-75ng4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-75ng4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-jhmrp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-jhmrp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-vc5k8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-vc5k8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.40s)
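
Note: the pipeline above resolves host.minikube.internal inside the pod, where awk 'NR==5' keeps the line of busybox nslookup output that carries the address and cut -d' ' -f3 keeps the address itself, then pings that host once. A sketch of the same two steps, assuming that nslookup output layout:

    HOST_IP=$(out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-75ng4 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 kubectl -p ha-628553 -- exec busybox-7dff88458-75ng4 -- sh -c "ping -c 1 $HOST_IP"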

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-628553 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-628553 -v=7 --alsologtostderr: (56.667018997s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.56s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-628553 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp testdata/cp-test.txt ha-628553:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553:/home/docker/cp-test.txt ha-628553-m02:/home/docker/cp-test_ha-628553_ha-628553-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m02 "sudo cat /home/docker/cp-test_ha-628553_ha-628553-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553:/home/docker/cp-test.txt ha-628553-m03:/home/docker/cp-test_ha-628553_ha-628553-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m03 "sudo cat /home/docker/cp-test_ha-628553_ha-628553-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553:/home/docker/cp-test.txt ha-628553-m04:/home/docker/cp-test_ha-628553_ha-628553-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m04 "sudo cat /home/docker/cp-test_ha-628553_ha-628553-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp testdata/cp-test.txt ha-628553-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m02:/home/docker/cp-test.txt ha-628553:/home/docker/cp-test_ha-628553-m02_ha-628553.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553 "sudo cat /home/docker/cp-test_ha-628553-m02_ha-628553.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m02:/home/docker/cp-test.txt ha-628553-m03:/home/docker/cp-test_ha-628553-m02_ha-628553-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m03 "sudo cat /home/docker/cp-test_ha-628553-m02_ha-628553-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m02:/home/docker/cp-test.txt ha-628553-m04:/home/docker/cp-test_ha-628553-m02_ha-628553-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m04 "sudo cat /home/docker/cp-test_ha-628553-m02_ha-628553-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp testdata/cp-test.txt ha-628553-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt ha-628553:/home/docker/cp-test_ha-628553-m03_ha-628553.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553 "sudo cat /home/docker/cp-test_ha-628553-m03_ha-628553.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt ha-628553-m02:/home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m02 "sudo cat /home/docker/cp-test_ha-628553-m03_ha-628553-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m03:/home/docker/cp-test.txt ha-628553-m04:/home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m04 "sudo cat /home/docker/cp-test_ha-628553-m03_ha-628553-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp testdata/cp-test.txt ha-628553-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4011994892/001/cp-test_ha-628553-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt ha-628553:/home/docker/cp-test_ha-628553-m04_ha-628553.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553 "sudo cat /home/docker/cp-test_ha-628553-m04_ha-628553.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt ha-628553-m02:/home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m02 "sudo cat /home/docker/cp-test_ha-628553-m04_ha-628553-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 cp ha-628553-m04:/home/docker/cp-test.txt ha-628553-m03:/home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m03 "sudo cat /home/docker/cp-test_ha-628553-m04_ha-628553-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.41s)
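
Note: CopyFile pushes a test file to every node with minikube cp and reads it back over minikube ssh, covering all node pairs. One representative pair from this run:

    out/minikube-linux-amd64 -p ha-628553 cp testdata/cp-test.txt ha-628553-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-628553 ssh -n ha-628553-m02 "sudo cat /home/docker/cp-test.txt"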

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (13.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-628553 node delete m03 -v=7 --alsologtostderr: (13.188517036s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.97s)
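
Note: the go-template query above prints the Ready condition status of each remaining node, one per line; after deleting m03 the test expects only True values. The same template, re-quoted so it can be pasted into an interactive shell:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'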

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (243.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-628553 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1007 12:28:04.452405  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:28:05.526857  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:30:01.380580  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-628553 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m2.508937239s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (243.32s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-628553 --control-plane -v=7 --alsologtostderr
E1007 12:31:42.463115  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-628553 --control-plane -v=7 --alsologtostderr: (1m17.146008424s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-628553 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
TestJSONOutput/start/Command (93s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-011433 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-011433 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m32.995121447s)
--- PASS: TestJSONOutput/start/Command (93.00s)
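
Note: with --output=json minikube emits one CloudEvents-style JSON object per line, and the DistinctCurrentSteps/IncreasingCurrentSteps subtests below assert properties of the data.currentstep field in the step events. A sketch of pulling those values out with jq, using the field names visible in the TestErrorJSONOutput events later in this report:

    out/minikube-linux-amd64 start -p json-output-011433 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep'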

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-011433 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-011433 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.37s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-011433 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-011433 --output=json --user=testUser: (7.366513103s)
--- PASS: TestJSONOutput/stop/Command (7.37s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-579077 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-579077 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.455217ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ae40c705-2ca7-442b-9d0b-4f459e8aee78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-579077] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a92f1a1-4018-4529-9203-6d4910f12294","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19763"}}
	{"specversion":"1.0","id":"9a38c641-de05-444e-b7d6-5b1f29d085a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c301c950-6d50-41d5-9390-adbcc08445cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig"}}
	{"specversion":"1.0","id":"015b7790-9d16-4e15-9385-9d4b239ea9ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube"}}
	{"specversion":"1.0","id":"ed9c5d25-8d77-4079-a80f-05e073effa0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"10fb1081-fa49-4277-95a1-85b22ddd72da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dea50807-ced0-4fdc-b6ab-7cd73c5af10c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-579077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-579077
--- PASS: TestErrorJSONOutput (0.22s)
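
Note: the unsupported driver path exits with status 56 and emits a single io.k8s.sigs.minikube.error event whose data block carries name, message, and exitcode, as shown in the stdout above. A sketch of isolating that event with jq:

    out/minikube-linux-amd64 start -p json-output-error-579077 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'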

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (88.45s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-712124 --driver=kvm2  --container-runtime=crio
E1007 12:35:01.380520  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-712124 --driver=kvm2  --container-runtime=crio: (40.912256123s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-725184 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-725184 --driver=kvm2  --container-runtime=crio: (44.491654795s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-712124
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-725184
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-725184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-725184
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-725184: (1.030498088s)
helpers_test.go:175: Cleaning up "first-712124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-712124
--- PASS: TestMinikubeProfile (88.45s)
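
Note: TestMinikubeProfile creates two profiles, switches the active one with minikube profile <name>, and reads the result back as JSON. The two commands as run above:

    out/minikube-linux-amd64 profile first-712124
    out/minikube-linux-amd64 profile list -ojson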

                                                
                                    
TestMountStart/serial/StartWithMountFirst (30.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-064527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-064527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.301159459s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.30s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-064527 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-064527 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
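
Note: mount verification is two checks: the mounted host directory is listable in the guest, and the guest's mount table shows a 9p entry for it. As exercised above:

    out/minikube-linux-amd64 -p mount-start-1-064527 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-064527 ssh -- mount | grep 9p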

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-080825 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1007 12:36:42.462749  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-080825 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.856273539s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.86s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080825 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080825 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-064527 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080825 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080825 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-080825
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-080825: (1.290275816s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.55s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-080825
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-080825: (21.551035931s)
--- PASS: TestMountStart/serial/RestartStopped (22.55s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080825 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-080825 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (109.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-263097 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-263097 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m49.434247462s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (109.86s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-263097 -- rollout status deployment/busybox: (2.328851814s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- exec busybox-7dff88458-gm7qg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- exec busybox-7dff88458-nd9cj -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- exec busybox-7dff88458-gm7qg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- exec busybox-7dff88458-nd9cj -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- exec busybox-7dff88458-gm7qg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- exec busybox-7dff88458-nd9cj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.99s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- exec busybox-7dff88458-gm7qg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- exec busybox-7dff88458-gm7qg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- exec busybox-7dff88458-nd9cj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-263097 -- exec busybox-7dff88458-nd9cj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                    
TestMultiNode/serial/AddNode (52.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-263097 -v 3 --alsologtostderr
E1007 12:40:01.381270  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-263097 -v 3 --alsologtostderr: (51.650958572s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.24s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-263097 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp testdata/cp-test.txt multinode-263097:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp multinode-263097:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3309803868/001/cp-test_multinode-263097.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp multinode-263097:/home/docker/cp-test.txt multinode-263097-m02:/home/docker/cp-test_multinode-263097_multinode-263097-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m02 "sudo cat /home/docker/cp-test_multinode-263097_multinode-263097-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp multinode-263097:/home/docker/cp-test.txt multinode-263097-m03:/home/docker/cp-test_multinode-263097_multinode-263097-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m03 "sudo cat /home/docker/cp-test_multinode-263097_multinode-263097-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp testdata/cp-test.txt multinode-263097-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp multinode-263097-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3309803868/001/cp-test_multinode-263097-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp multinode-263097-m02:/home/docker/cp-test.txt multinode-263097:/home/docker/cp-test_multinode-263097-m02_multinode-263097.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097 "sudo cat /home/docker/cp-test_multinode-263097-m02_multinode-263097.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp multinode-263097-m02:/home/docker/cp-test.txt multinode-263097-m03:/home/docker/cp-test_multinode-263097-m02_multinode-263097-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m03 "sudo cat /home/docker/cp-test_multinode-263097-m02_multinode-263097-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp testdata/cp-test.txt multinode-263097-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp multinode-263097-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3309803868/001/cp-test_multinode-263097-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp multinode-263097-m03:/home/docker/cp-test.txt multinode-263097:/home/docker/cp-test_multinode-263097-m03_multinode-263097.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097 "sudo cat /home/docker/cp-test_multinode-263097-m03_multinode-263097.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 cp multinode-263097-m03:/home/docker/cp-test.txt multinode-263097-m02:/home/docker/cp-test_multinode-263097-m03_multinode-263097-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 ssh -n multinode-263097-m02 "sudo cat /home/docker/cp-test_multinode-263097-m03_multinode-263097-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.51s)

                                                
                                    
TestMultiNode/serial/StopNode (2.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-263097 node stop m03: (1.536538288s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-263097 status: exit status 7 (447.503056ms)

                                                
                                                
-- stdout --
	multinode-263097
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-263097-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-263097-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
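
Note: with m03 stopped, minikube status prints the per-node breakdown above and exits non-zero (exit status 7 in this run), so a script can detect the degraded node from the exit code alone. A minimal sketch:

    out/minikube-linux-amd64 -p multinode-263097 status
    echo $?    # 7 in this run: one node's host and kubelet are Stopped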
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-263097 status --alsologtostderr: exit status 7 (460.787429ms)

                                                
                                                
-- stdout --
	multinode-263097
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-263097-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-263097-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 12:40:33.212850  419492 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:40:33.212980  419492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:40:33.212987  419492 out.go:358] Setting ErrFile to fd 2...
	I1007 12:40:33.212993  419492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:40:33.213225  419492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19763-377026/.minikube/bin
	I1007 12:40:33.213497  419492 out.go:352] Setting JSON to false
	I1007 12:40:33.213529  419492 mustload.go:65] Loading cluster: multinode-263097
	I1007 12:40:33.213599  419492 notify.go:220] Checking for updates...
	I1007 12:40:33.214038  419492 config.go:182] Loaded profile config "multinode-263097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1007 12:40:33.214066  419492 status.go:174] checking status of multinode-263097 ...
	I1007 12:40:33.214550  419492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:33.214617  419492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:33.235921  419492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42967
	I1007 12:40:33.236572  419492 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:33.237223  419492 main.go:141] libmachine: Using API Version  1
	I1007 12:40:33.237254  419492 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:33.237705  419492 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:33.237951  419492 main.go:141] libmachine: (multinode-263097) Calling .GetState
	I1007 12:40:33.239970  419492 status.go:371] multinode-263097 host status = "Running" (err=<nil>)
	I1007 12:40:33.239995  419492 host.go:66] Checking if "multinode-263097" exists ...
	I1007 12:40:33.240519  419492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:33.240603  419492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:33.257369  419492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44213
	I1007 12:40:33.257946  419492 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:33.258542  419492 main.go:141] libmachine: Using API Version  1
	I1007 12:40:33.258572  419492 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:33.258930  419492 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:33.259138  419492 main.go:141] libmachine: (multinode-263097) Calling .GetIP
	I1007 12:40:33.262237  419492 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:40:33.262706  419492 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:40:33.262746  419492 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:40:33.262932  419492 host.go:66] Checking if "multinode-263097" exists ...
	I1007 12:40:33.263267  419492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:33.263319  419492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:33.279697  419492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42135
	I1007 12:40:33.280266  419492 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:33.280808  419492 main.go:141] libmachine: Using API Version  1
	I1007 12:40:33.280833  419492 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:33.281173  419492 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:33.281373  419492 main.go:141] libmachine: (multinode-263097) Calling .DriverName
	I1007 12:40:33.281569  419492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:40:33.281594  419492 main.go:141] libmachine: (multinode-263097) Calling .GetSSHHostname
	I1007 12:40:33.284419  419492 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:40:33.284901  419492 main.go:141] libmachine: (multinode-263097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:f6:ad", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:37:51 +0000 UTC Type:0 Mac:52:54:00:76:f6:ad Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-263097 Clientid:01:52:54:00:76:f6:ad}
	I1007 12:40:33.284931  419492 main.go:141] libmachine: (multinode-263097) DBG | domain multinode-263097 has defined IP address 192.168.39.213 and MAC address 52:54:00:76:f6:ad in network mk-multinode-263097
	I1007 12:40:33.285114  419492 main.go:141] libmachine: (multinode-263097) Calling .GetSSHPort
	I1007 12:40:33.285317  419492 main.go:141] libmachine: (multinode-263097) Calling .GetSSHKeyPath
	I1007 12:40:33.285515  419492 main.go:141] libmachine: (multinode-263097) Calling .GetSSHUsername
	I1007 12:40:33.285701  419492 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/multinode-263097/id_rsa Username:docker}
	I1007 12:40:33.362885  419492 ssh_runner.go:195] Run: systemctl --version
	I1007 12:40:33.370255  419492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:40:33.387781  419492 kubeconfig.go:125] found "multinode-263097" server: "https://192.168.39.213:8443"
	I1007 12:40:33.387825  419492 api_server.go:166] Checking apiserver status ...
	I1007 12:40:33.387872  419492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:40:33.410668  419492 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1115/cgroup
	W1007 12:40:33.424244  419492 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1115/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1007 12:40:33.424317  419492 ssh_runner.go:195] Run: ls
	I1007 12:40:33.431268  419492 api_server.go:253] Checking apiserver healthz at https://192.168.39.213:8443/healthz ...
	I1007 12:40:33.436598  419492 api_server.go:279] https://192.168.39.213:8443/healthz returned 200:
	ok
	I1007 12:40:33.436638  419492 status.go:463] multinode-263097 apiserver status = Running (err=<nil>)
	I1007 12:40:33.436654  419492 status.go:176] multinode-263097 status: &{Name:multinode-263097 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:40:33.436699  419492 status.go:174] checking status of multinode-263097-m02 ...
	I1007 12:40:33.437024  419492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:33.437097  419492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:33.453557  419492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I1007 12:40:33.454017  419492 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:33.454504  419492 main.go:141] libmachine: Using API Version  1
	I1007 12:40:33.454527  419492 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:33.454862  419492 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:33.455136  419492 main.go:141] libmachine: (multinode-263097-m02) Calling .GetState
	I1007 12:40:33.456777  419492 status.go:371] multinode-263097-m02 host status = "Running" (err=<nil>)
	I1007 12:40:33.456795  419492 host.go:66] Checking if "multinode-263097-m02" exists ...
	I1007 12:40:33.457094  419492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:33.457140  419492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:33.473467  419492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I1007 12:40:33.473876  419492 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:33.474361  419492 main.go:141] libmachine: Using API Version  1
	I1007 12:40:33.474387  419492 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:33.474717  419492 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:33.474889  419492 main.go:141] libmachine: (multinode-263097-m02) Calling .GetIP
	I1007 12:40:33.477413  419492 main.go:141] libmachine: (multinode-263097-m02) DBG | domain multinode-263097-m02 has defined MAC address 52:54:00:59:09:d3 in network mk-multinode-263097
	I1007 12:40:33.477798  419492 main.go:141] libmachine: (multinode-263097-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:09:d3", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:38:51 +0000 UTC Type:0 Mac:52:54:00:59:09:d3 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-263097-m02 Clientid:01:52:54:00:59:09:d3}
	I1007 12:40:33.477827  419492 main.go:141] libmachine: (multinode-263097-m02) DBG | domain multinode-263097-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:59:09:d3 in network mk-multinode-263097
	I1007 12:40:33.477974  419492 host.go:66] Checking if "multinode-263097-m02" exists ...
	I1007 12:40:33.478292  419492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:33.478335  419492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:33.494400  419492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36227
	I1007 12:40:33.494888  419492 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:33.495459  419492 main.go:141] libmachine: Using API Version  1
	I1007 12:40:33.495485  419492 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:33.495805  419492 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:33.496013  419492 main.go:141] libmachine: (multinode-263097-m02) Calling .DriverName
	I1007 12:40:33.496184  419492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:40:33.496201  419492 main.go:141] libmachine: (multinode-263097-m02) Calling .GetSSHHostname
	I1007 12:40:33.498884  419492 main.go:141] libmachine: (multinode-263097-m02) DBG | domain multinode-263097-m02 has defined MAC address 52:54:00:59:09:d3 in network mk-multinode-263097
	I1007 12:40:33.499316  419492 main.go:141] libmachine: (multinode-263097-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:09:d3", ip: ""} in network mk-multinode-263097: {Iface:virbr1 ExpiryTime:2024-10-07 13:38:51 +0000 UTC Type:0 Mac:52:54:00:59:09:d3 Iaid: IPaddr:192.168.39.239 Prefix:24 Hostname:multinode-263097-m02 Clientid:01:52:54:00:59:09:d3}
	I1007 12:40:33.499347  419492 main.go:141] libmachine: (multinode-263097-m02) DBG | domain multinode-263097-m02 has defined IP address 192.168.39.239 and MAC address 52:54:00:59:09:d3 in network mk-multinode-263097
	I1007 12:40:33.499531  419492 main.go:141] libmachine: (multinode-263097-m02) Calling .GetSSHPort
	I1007 12:40:33.499706  419492 main.go:141] libmachine: (multinode-263097-m02) Calling .GetSSHKeyPath
	I1007 12:40:33.499890  419492 main.go:141] libmachine: (multinode-263097-m02) Calling .GetSSHUsername
	I1007 12:40:33.500009  419492 sshutil.go:53] new ssh client: &{IP:192.168.39.239 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19763-377026/.minikube/machines/multinode-263097-m02/id_rsa Username:docker}
	I1007 12:40:33.579748  419492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:40:33.599624  419492 status.go:176] multinode-263097-m02 status: &{Name:multinode-263097-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1007 12:40:33.599672  419492 status.go:174] checking status of multinode-263097-m03 ...
	I1007 12:40:33.600012  419492 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1007 12:40:33.600071  419492 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1007 12:40:33.616893  419492 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I1007 12:40:33.617640  419492 main.go:141] libmachine: () Calling .GetVersion
	I1007 12:40:33.618277  419492 main.go:141] libmachine: Using API Version  1
	I1007 12:40:33.618297  419492 main.go:141] libmachine: () Calling .SetConfigRaw
	I1007 12:40:33.618635  419492 main.go:141] libmachine: () Calling .GetMachineName
	I1007 12:40:33.618839  419492 main.go:141] libmachine: (multinode-263097-m03) Calling .GetState
	I1007 12:40:33.620483  419492 status.go:371] multinode-263097-m03 host status = "Stopped" (err=<nil>)
	I1007 12:40:33.620501  419492 status.go:384] host is not running, skipping remaining checks
	I1007 12:40:33.620507  419492 status.go:176] multinode-263097-m03 status: &{Name:multinode-263097-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)

TestMultiNode/serial/StartAfterStop (38.46s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-263097 node start m03 -v=7 --alsologtostderr: (37.801550951s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.46s)

TestMultiNode/serial/DeleteNode (2.17s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-263097 node delete m03: (1.620844797s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.17s)

TestMultiNode/serial/RestartMultiNode (199.21s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-263097 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1007 12:50:01.380773  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 12:51:42.462148  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-263097 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m18.647526953s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-263097 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (199.21s)

TestMultiNode/serial/ValidateNameConflict (45.59s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-263097
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-263097-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-263097-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (71.522241ms)

-- stdout --
	* [multinode-263097-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-263097-m02' is duplicated with machine name 'multinode-263097-m02' in profile 'multinode-263097'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-263097-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-263097-m03 --driver=kvm2  --container-runtime=crio: (44.383978056s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-263097
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-263097: exit status 80 (231.277353ms)

-- stdout --
	* Adding node m03 to cluster multinode-263097 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-263097-m03 already exists in multinode-263097-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-263097-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.59s)

TestScheduledStopUnix (119.31s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-847666 --memory=2048 --driver=kvm2  --container-runtime=crio
E1007 12:56:42.462510  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-847666 --memory=2048 --driver=kvm2  --container-runtime=crio: (47.599457712s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847666 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-847666 -n scheduled-stop-847666
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847666 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1007 12:56:46.654850  384271 retry.go:31] will retry after 72.255µs: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.656045  384271 retry.go:31] will retry after 82.88µs: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.657195  384271 retry.go:31] will retry after 288.228µs: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.658321  384271 retry.go:31] will retry after 193.209µs: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.659472  384271 retry.go:31] will retry after 443.447µs: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.660634  384271 retry.go:31] will retry after 447.007µs: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.661806  384271 retry.go:31] will retry after 1.240901ms: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.664043  384271 retry.go:31] will retry after 1.861971ms: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.666310  384271 retry.go:31] will retry after 1.988119ms: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.668645  384271 retry.go:31] will retry after 2.179237ms: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.671903  384271 retry.go:31] will retry after 4.647477ms: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.677117  384271 retry.go:31] will retry after 5.101236ms: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.683382  384271 retry.go:31] will retry after 14.359079ms: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.698701  384271 retry.go:31] will retry after 11.950504ms: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
I1007 12:56:46.711064  384271 retry.go:31] will retry after 42.728767ms: open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/scheduled-stop-847666/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847666 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-847666 -n scheduled-stop-847666
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-847666
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-847666 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-847666
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-847666: exit status 7 (79.231849ms)

-- stdout --
	scheduled-stop-847666
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-847666 -n scheduled-stop-847666
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-847666 -n scheduled-stop-847666: exit status 7 (69.526421ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-847666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-847666
--- PASS: TestScheduledStopUnix (119.31s)

TestRunningBinaryUpgrade (220.35s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.258183461 start -p running-upgrade-872700 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.258183461 start -p running-upgrade-872700 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.341974238s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-872700 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-872700 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.376128045s)
helpers_test.go:175: Cleaning up "running-upgrade-872700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-872700
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-872700: (1.25187404s)
--- PASS: TestRunningBinaryUpgrade (220.35s)

TestPause/serial/Start (56.05s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-614270 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-614270 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (56.048674112s)
--- PASS: TestPause/serial/Start (56.05s)

TestStoppedBinaryUpgrade/Setup (0.42s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

TestStoppedBinaryUpgrade/Upgrade (179.73s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1559889354 start -p stopped-upgrade-753355 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1559889354 start -p stopped-upgrade-753355 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m45.04652156s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1559889354 -p stopped-upgrade-753355 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1559889354 -p stopped-upgrade-753355 stop: (2.166767137s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-753355 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1007 13:00:01.380601  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-753355 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.515569647s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (179.73s)

TestPause/serial/SecondStartNoReconfiguration (84.12s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-614270 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-614270 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.082158755s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (84.12s)

TestPause/serial/Pause (0.98s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-614270 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.98s)

TestPause/serial/VerifyStatus (0.3s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-614270 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-614270 --output=json --layout=cluster: exit status 2 (294.982372ms)

-- stdout --
	{"Name":"pause-614270","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-614270","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

TestPause/serial/Unpause (0.78s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-614270 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

TestPause/serial/PauseAgain (0.98s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-614270 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.98s)

TestPause/serial/DeletePaused (1.07s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-614270 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-614270 --alsologtostderr -v=5: (1.072198819s)
--- PASS: TestPause/serial/DeletePaused (1.07s)

TestPause/serial/VerifyDeletedResources (0.79s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.79s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-226737 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-226737 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (86.464141ms)

-- stdout --
	* [NoKubernetes-226737] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19763
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19763-377026/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19763-377026/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (56.36s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-226737 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-226737 --driver=kvm2  --container-runtime=crio: (56.084218494s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-226737 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (56.36s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-753355
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestNoKubernetes/serial/StartWithStopK8s (47.06s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-226737 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1007 13:01:24.457040  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/addons-246818/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:01:25.530415  384271 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19763-377026/.minikube/profiles/functional-790363/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-226737 --no-kubernetes --driver=kvm2  --container-runtime=crio: (45.896288511s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-226737 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-226737 status -o json: exit status 2 (270.115581ms)

-- stdout --
	{"Name":"NoKubernetes-226737","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-226737
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (47.06s)

TestNoKubernetes/serial/Start (50.23s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-226737 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-226737 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.22767481s)
--- PASS: TestNoKubernetes/serial/Start (50.23s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-226737 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-226737 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.309193ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

TestNoKubernetes/serial/ProfileList (1.44s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.44s)

TestNoKubernetes/serial/Stop (1.36s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-226737
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-226737: (1.363203568s)
--- PASS: TestNoKubernetes/serial/Stop (1.36s)

TestNoKubernetes/serial/StartNoArgs (42.03s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-226737 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-226737 --driver=kvm2  --container-runtime=crio: (42.031821938s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.03s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-226737 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-226737 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.338712ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

Test skip (34/228)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.39s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:979: (dbg) Run:  out/minikube-linux-amd64 -p addons-246818 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.39s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)